The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).
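The equivalent-noise logic behind this design can be illustrated with the standard two-parameter model, in which direction-discrimination thresholds reflect the sum of internal and external direction noise, reduced by averaging over the number of pooled local estimates. The sketch below is illustrative only; the parameter values are arbitrary and are not the estimates reported in the study.

```python
import numpy as np

def predicted_threshold(sigma_int, sigma_ext, n_samples):
    """Standard equivalent-noise model: the predicted direction-
    discrimination threshold reflects internal noise (sigma_int) and
    external stimulus noise (sigma_ext), both in degrees, divided by
    the number of local estimates pooled (n_samples)."""
    return np.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_samples)

# Illustrative (arbitrary) parameters: an observer with lower internal
# noise and more pooled samples outperforms one with higher internal
# noise and fewer samples at every external-noise level.
ext_noise = np.array([0.0, 2.0, 8.0, 32.0])
younger = predicted_threshold(6.0, ext_noise, 4)
older = predicted_threshold(3.0, ext_noise, 16)
```

At low external noise the threshold is dominated by internal noise; at high external noise it is dominated by the pooling term, which is how the two limits can be separated empirically.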
People with migraine are relatively poor at judging the direction of motion of coherently moving signal dots when interspersed with noise dots drifting in random directions, a task known as motion coherence. Although this has been taken as evidence of impoverished global pooling of motion signals, it could also arise from unreliable coding of local direction (of each dot), or an inability to segment signal from noise (noise-exclusion). The aim of this study was to determine how these putative limits contribute to impoverished motion processing in migraine.
Sensitivity to visual numerosity has previously been shown to predict human mathematical performance. However, it is not clear whether it is discrimination of numerosity per se that is predictive of mathematical ability, or whether the association is driven by more general task demands. To test this notion, we had over 300 participants (ranging in age from 6 to 73 years) perform a symbolic mathematics test and 4 different visuospatial matching tasks. The visual tasks involved matching 2 clusters of Gabor elements for their numerosity, density, size or orientation by a method of adjustment. Partial correlation and regression analyses showed that sensitivity to visual numerosity, sensitivity to visual orientation and mathematical education level predict a significant proportion of shared as well as unique variance in mathematics scores. These findings suggest that sensitivity to visual numerosity is not a unique visual psychophysical predictor of mathematical ability. Instead, the data are consistent with mathematics representing a multi-factorial process that shares resources with a number of visuospatial tasks.
Compared to unaffected observers, patients with schizophrenia (SZ) show characteristic differences in visual perception, including a reduced susceptibility to the influence of context on judgments of contrast, a manifestation of weaker surround suppression (SS). To examine the generality of this phenomenon, we measured the ability of 24 individuals with SZ to judge the luminance, contrast, orientation, and size of targets embedded in contextual surrounds that would typically influence the targets' appearance. Individuals with SZ demonstrated weaker SS compared to matched controls for stimuli defined by contrast or size, but not for those defined by luminance or orientation. As perceived luminance is thought to be regulated at the earliest stages of visual processing, our findings are consistent with a suppression deficit that is predominantly cortical in origin. In addition, we propose that preserved orientation SS in SZ may reflect the sparing of broadly tuned mechanisms of suppression. We attempt to reconcile these data with findings from previous studies.
Detection of visual contours (strings of small oriented elements) is markedly poor in schizophrenia. This has previously been attributed to an inability to group local information across space into a global percept. Here, we show that this failure actually originates from a combination of poor encoding of local orientation and abnormal processing of visual context.
There is considerable interest in how humans estimate the number of objects in a scene in the context of an extensive literature on how we estimate the density (i.e., spacing) of objects. Here, we show that our sense of number and our sense of density are intertwined. Presented with two patches, observers found it more difficult to spot differences in either density or numerosity when those patches were mismatched in overall size, and their errors were consistent with larger patches appearing both denser and more numerous. We propose that density is estimated using the relative response of mechanisms tuned to low and high spatial frequencies (SFs), because energy at high SFs is largely determined by the number of objects, whereas low SF energy depends more on the area occupied by elements. This measure is biased by overall stimulus size in the same way as human observers, and by estimating number using the same measure scaled by relative stimulus size, we can explain all of our results. This model is a simple, biologically plausible common metric for perceptual number and density.
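A toy sketch of the proposed spectral account is given below: Gaussian-blob patches are rendered and their Fourier energy is split into low and high spatial-frequency bands, with the high-SF band tracking element number. The stimulus generator, the band cutoff `f_cut`, and all parameter values are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def blob_image(n_blobs, patch_radius, size=128, sigma=1.5):
    """Render n_blobs Gaussian blobs at random positions inside a
    circular patch (a hypothetical stand-in for the dot stimuli)."""
    img = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    centre = size / 2
    for _ in range(n_blobs):
        # Rejection-sample a position inside the circular patch.
        while True:
            x, y = rng.uniform(centre - patch_radius, centre + patch_radius, 2)
            if (x - centre) ** 2 + (y - centre) ** 2 <= patch_radius ** 2:
                break
        img += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    return img

def sf_energy(img, f_cut=10):
    """Split Fourier power into low and high spatial-frequency bands,
    excluding the DC component. f_cut is an arbitrary choice."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    size = img.shape[0]
    fy, fx = np.mgrid[0:size, 0:size] - size // 2
    r = np.hypot(fx, fy)
    lo = power[(r > 0) & (r <= f_cut)].sum()
    hi = power[r > f_cut].sum()
    return lo, hi

# More elements in the same patch -> more high-SF energy, as the
# incoherently summed per-blob power grows with element number.
lo16, hi16 = sf_energy(blob_image(16, 40))
lo64, hi64 = sf_energy(blob_image(64, 40))
```

Under this sketch, the ratio of band responses can serve as a size-biased density code, and rescaling that code by relative stimulus size yields a number estimate, consistent with the account described above.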
In visual metacontrast masking, the visibility of a brief target stimulus can be reduced substantially if it is preceded (forward masking) or followed (backward masking) by a non-overlapping mask. These effects have been attributed to inhibitory processes within the visual system. Two previous studies have used metacontrast masking to assess inhibitory function in migraine and control groups; however, each used a different type of masking and obtained different results.
The human visual system must perform complex visuospatial extrapolations (VSE) across space and time in order to extract shape and form from the retinal projection of a cluttered visual environment characterized by occluded surfaces and moving objects. Even if we exclude the temporal dimension, for instance when judging whether an extended finger is pointing towards one object or another, the mechanisms of VSE remain opaque. Here we investigated the neural correlates of VSE using functional magnetic resonance imaging in sixteen human observers while they judged the relative position of, or saccaded to, a (virtual) target defined by the extrapolated path of a pointer. Using whole brain and region of interest (ROI) analyses, we compared the brain activity evoked by these VSE tasks to similar control judgements or eye movements made to explicit (dot) targets that did not require extrapolation. The data show that activity in an occipitotemporal region that included the lateral occipital cortex (LOC) was significantly greater during VSE than during control tasks. A similar, though less pronounced, pattern was also evident in regions of the fronto-parietal cortex that included the frontal eye fields. However, none of the ROIs examined exhibited a significant interaction between target type (extrapolated/explicit) and response type (oculomotor/perceptual). These findings are consistent with a close association between visuoperceptual and oculomotor responses, and highlight a critical role for the LOC in the process of VSE.
Pointing movements made to a target defined by the imaginary intersection of a pointer with a distant landing line were examined in healthy human observers in order to determine whether such motor responses are susceptible to the Poggendorff effect. In this well-known geometric illusion, observers make systematic extrapolation errors when the pointer abuts a second line (the inducer). The kinematics of extrapolation movements, in which no explicit target was present, were similar to those made in response to a rapid-onset (explicit) dot target. The results unambiguously demonstrate that motor (pointing) responses are susceptible to the illusion. In fact, raw motor biases were greater than for perceptual responses: in the absence of an inducer (and hence also the acute angle of the Poggendorff stimulus) perceptual responses were near-veridical, whilst motor responses retained a bias. Therefore, the full Poggendorff stimulus contained two biases: one mediated by the acute angle formed between the oblique pointer and the inducing line (the classic Poggendorff effect), which affected both motor and perceptual responses equally, and another bias, which was independent of the inducer and primarily affected motor responses. We conjecture that this additional motor bias is associated with an undershoot in the unknown direction of movement and provide evidence to justify this claim. In conclusion, both manual pointing and perceptual judgements are susceptible to the well-known Poggendorff effect, supporting the notion of a unitary representation of space for action and perception or else an early locus for the effect, prior to the divergence of processing streams.
While there is evidence for multiple spatial and attentional maps in the brain, it is not clear to what extent visuoperceptual and oculomotor tasks rely on common neural representations and attentional mechanisms. Using a dual-task interference paradigm, we tested the hypothesis that eye movements and perceptual judgments made to simultaneously presented visuospatial information compete for shared limited resources. Observers undertook judgments of stimulus collinearity (perceptual extrapolation) using a pointer and Gabor patch and/or performed saccades to a peripheral dot target while their eye movements were recorded. In addition, observers performed a non-spatial control task (contrast discrimination), matched for task difficulty and stimulus structure, which on the basis of previous studies was expected to represent a lesser load on putative shared resources. Greater mutual interference was indeed found between the saccade and extrapolation task pair than between the saccade and contrast discrimination task pair. These data are consistent with visuoperceptual and oculomotor responses competing for common limited resources, as well as spatial tasks incurring a relatively high attentional cost.
There is a wealth of literature on the role of short-range interactions between low-level orientation-tuned filters in the perception of discontinuous contours. However, little is known about how spatial information is integrated across more distant regions of the visual field in the absence of explicit local orientation cues, a process referred to here as visuospatial interpolation (VSI). To examine the neural correlates of VSI, high field functional magnetic resonance imaging was used to study brain activity while observers either judged the alignment of three Gabor patches by a process of interpolation or discriminated the local orientation of the individual patches. Relative to a fixation baseline, the two tasks activated a largely overlapping network of regions within the occipito-temporal, occipito-parietal and frontal cortices. Activated clusters specific to the orientation task (orientation > interpolation) included the caudal intraparietal sulcus, an area whose role in orientation encoding per se has been hotly disputed. Surprisingly, there were few task-specific activations associated with visuospatial interpolation (VSI > orientation), suggesting that largely common cortical loci were activated by the two experimental tasks. These data are consistent with previous studies that suggest higher level grouping processes, putatively involved in VSI, are automatically engaged when the spatial properties of a stimulus (e.g., size, orientation or relative position) are used to make a judgement.
While observers are adept at judging the density of elements (e.g., in a random-dot image), it has recently been proposed that they also have an independent visual sense of number. To test the independence of number and density discrimination, we examined the effects of manipulating stimulus structure (patch size, element size, contrast, and contrast-polarity) and available attentional resources on both judgments. Five observers made a series of two-alternative, forced-choice discriminations based on the relative numerosity/density of two simultaneously presented patches containing 16-1,024 Gaussian blobs. Mismatches of patch size and element size (across reference and test) led to bias and reduced sensitivity in both tasks, whereas manipulations of contrast and contrast-polarity had varied effects on observers, implying differing strategies. Nonetheless, the effects reported were consistent across density and number judgments, the only exception being when luminance cues were made available. Finally, density and number judgments were similarly impaired by attentional load in a dual-task experiment. These results are consistent with a common underlying metric to density and number judgments, with the caveat that additional cues may be exploited when they are available.