Purpose: Patients with macular disease often report experiencing metamorphopsia (visual distortion). Although typically measured with Amsler charts, more objective and quantitative assessments of perceived distortion are desirable to effectively monitor the presence, progression and remediation of visual impairment. Methods: Participants with binocular (n = 33) and monocular (n = 50) maculopathy across seven disease groups, and control participants (n = 10) with no identifiable retinal disease, completed a modified Amsler grid assessment (presented on a computer screen with eye tracking to ensure fixation compliance) and two novel objective measures of metamorphopsia in the central five degrees of visual field. Eighty-one percent (67/83) of participants completed a task requiring them to configure eight dots in the shape of a square, and 64% (32/50) of participants experiencing monocular distortion completed a spatial alignment task using dichoptic stimuli. All ten controls completed all tasks. Results: Horizontal and vertical distortion magnitudes were calculated for each of the three assessments. Distortion magnitudes were significantly higher in patients than controls in all assessments. There was no significant difference in magnitude of distortion across different macular diseases. Among patients, there were no significant correlations between overall magnitude of distortion among any of the three measures and no significant correlations in localized measures of distortion. Conclusions: Three alternative quantifications of monocular spatial distortion in the central visual field generated uncorrelated estimates of visual distortion. It is therefore unlikely that metamorphopsia is caused solely by displacement of photoreceptors in the retina, but instead involves additional top-down information, knowledge about the scene, and perhaps, cortical reorganization.
The development of motion processing is a critical part of visual development, allowing children to interact with moving objects and navigate within a dynamic environment. However, global motion processing, which requires pooling motion information across space, develops late, reaching adult-like levels only by mid-to-late childhood. The reasons underlying this protracted development are not yet fully understood. In this study, we sought to determine whether the development of motion coherence sensitivity is limited by internal noise (i.e., imprecision in estimating the directions of individual elements) and/or global pooling across local estimates. To this end, we presented equivalent noise direction discrimination tasks and motion coherence tasks at both slow (1.5°/s) and fast (6°/s) speeds to children aged 5, 7, 9 and 11 years, and adults. We show that, as children get older, their levels of internal noise reduce, and they are able to average across more local motion estimates. Regression analyses indicated, however, that age-related improvements in coherent motion perception are driven solely by improvements in averaging and not by reductions in internal noise. Our results suggest that the development of coherent motion sensitivity is primarily limited by developmental changes within brain regions involved in integrating motion signals (e.g., MT/V5).
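The equivalent-noise logic described above separates two limits on direction discrimination: internal noise on each local estimate, and the number of estimates pooled. A minimal sketch of the standard equivalent-noise prediction, using hypothetical parameter values purely for illustration (the study's fitted values are not reproduced here):

```python
import numpy as np

def equivalent_noise_threshold(sigma_ext, sigma_int, n_samples):
    """Predicted discrimination threshold under the standard
    equivalent-noise model: local direction estimates corrupted by
    internal noise (sigma_int, deg) and external stimulus noise
    (sigma_ext, deg) are averaged over n_samples elements."""
    return np.sqrt((sigma_int**2 + sigma_ext**2) / n_samples)

# Illustrative (hypothetical) parameters: a younger child with higher
# internal noise and fewer effective samples versus an adult.
child = equivalent_noise_threshold(sigma_ext=0.0, sigma_int=8.0, n_samples=4)
adult = equivalent_noise_threshold(sigma_ext=0.0, sigma_int=4.0, n_samples=16)
assert child > adult  # both factors predict higher thresholds in children
```

In practice, sigma_int and n_samples are estimated by fitting this function to thresholds measured across several levels of external direction noise.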
It has been suggested that numerosity is an elementary quality of perception, similar to colour. If so (and despite considerable investigation), its mechanism remains unknown. Here, we show that observers require on average a massive difference of approximately 40% to detect a change in the number of objects that vary irrelevantly in blur, contrast and spatial separation, and that some naive observers require even more than this. We suggest that relative numerosity is a type of texture discrimination and that a simple model computing the contrast energy at fine spatial scales in the image can perform at least as well as human observers. Like some human observers, this mechanism finds it harder to discriminate relative numerosity in two patterns with different degrees of blur, but it still outpaces the human. We propose energy discrimination as a benchmark model against which more complex models and new data can be tested.
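The energy-discrimination benchmark described above can be sketched very simply: high-pass filter the image and sum the squared output. The cutoff frequency and toy dot stimuli below are illustrative assumptions, not the paper's exact filter or stimuli:

```python
import numpy as np

def fine_scale_energy(image, cutoff=0.25):
    """Contrast energy at fine spatial scales: remove coarse-scale
    structure in the Fourier domain, then sum the squared output.
    `cutoff` is a normalised frequency (an illustrative choice)."""
    f = np.fft.fft2(image - image.mean())
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    f[radius < cutoff] = 0.0          # discard coarse scales
    filtered = np.fft.ifft2(f).real
    return np.sum(filtered**2)

# Toy demonstration: a patch with more dots yields more fine-scale energy.
rng = np.random.default_rng(0)

def dot_pattern(n, size=64):
    img = np.zeros((size, size))
    ys, xs = rng.integers(0, size, n), rng.integers(0, size, n)
    img[ys, xs] = 1.0
    return img

assert fine_scale_energy(dot_pattern(80)) > fine_scale_energy(dot_pattern(20))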
People with migraine are relatively poor at judging the direction of motion of coherently moving signal dots when interspersed with noise dots drifting in random directions, a task known as motion coherence. Although this has been taken as evidence of impoverished global pooling of motion signals, it could also arise from unreliable coding of local direction (of each dot), or an inability to segment signal from noise (noise-exclusion). The aim of this study was to determine how these putative limits contribute to impoverished motion processing in migraine.
Test-retest variability (TRV) limits our ability to detect clinically significant changes in visual acuity (VA). We wanted to compare the effect of scoring and termination rules on TRV for logMAR charts, employing either conventional or pseudo high-pass (Vanishing Optotype) letters.
We examined how crowding (the breakdown of object recognition in the periphery caused by interference from "clutter") depends on the global arrangement of target and distracting flanker elements. Specifically, we probed orientation discrimination using a near-vertical target Gabor flanked by two vertical distractor Gabors (one above and one below the target). By applying variable (opposite-sign) horizontal offsets to the positions of the two flankers, we arranged the elements so that on some trials they formed contours with the target and on others they did not. While the presence of flankers generally elevated orientation discrimination thresholds for the target, we observed maximal crowding not when flankers and target were co-aligned but when a small spatial offset was applied to the flanker locations, so that contours formed between flankers and target only when the target orientation was cued. We also report that observers' orientation judgments are biased, with target orientation appearing either attracted or repulsed by the global/contour orientation. A second experiment reveals that the sign of this effect depends both on the observer and on eccentricity. In general, the magnitude of repulsion is reduced with eccentricity, but whether this becomes attraction (of element orientation to contour orientation) depends on the observer. We note, however, that across observers and eccentricities, the magnitude of repulsion correlates positively with the amount of release from crowding observed with co-aligned targets and flankers, supporting the notion of fluctuating bias as the basis for elevated crowding within contours.
Sensitivity to visual numerosity has previously been shown to predict human mathematical performance. However, it is not clear whether it is discrimination of numerosity per se that is predictive of mathematical ability, or whether the association is driven by more general task demands. To test this notion we had over 300 participants (ranging in age from 6 to 73 years) perform a symbolic mathematics test and 4 different visuospatial matching tasks. The visual tasks involved matching 2 clusters of Gabor elements for their numerosity, density, size or orientation by a method of adjustment. Partial correlation and regression analyses showed that sensitivity to visual numerosity, sensitivity to visual orientation and mathematical education level predict a significant proportion of shared as well as unique variance in mathematics scores. These findings suggest that sensitivity to visual numerosity is not a unique visual psychophysical predictor of mathematical ability. Instead, the data are consistent with mathematics representing a multi-factorial process that shares resources with a number of visuospatial tasks.
Compared to unaffected observers, patients with schizophrenia (SZ) show characteristic differences in visual perception, including a reduced susceptibility to the influence of context on judgments of contrast, a manifestation of weaker surround suppression (SS). To examine the generality of this phenomenon, we measured the ability of 24 individuals with SZ to judge the luminance, contrast, orientation, and size of targets embedded in contextual surrounds that would typically influence the targets' appearance. Individuals with SZ demonstrated weaker SS compared to matched controls for stimuli defined by contrast or size, but not for those defined by luminance or orientation. As perceived luminance is thought to be regulated at the earliest stages of visual processing, our findings are consistent with a suppression deficit that is predominantly cortical in origin. In addition, we propose that preserved orientation SS in SZ may reflect the sparing of broadly tuned mechanisms of suppression. We attempt to reconcile these data with findings from previous studies.
Arctic reindeer experience extreme changes in environmental light from continuous summer daylight to continuous winter darkness. Here, we show that they may have a unique mechanism to cope with winter darkness by changing the wavelength reflection from their tapetum lucidum (TL). In summer, it is golden with most light reflected back directly through the retina, whereas in winter it is deep blue with less light reflected out of the eye. The blue reflection in winter is associated with significantly increased retinal sensitivity compared with summer animals. The wavelength of reflection depends on TL collagen spacing, with reduced spacing resulting in shorter wavelengths, which we confirmed in summer and winter animals. Winter animals have significantly increased intra-ocular pressure, probably produced by permanent pupil dilation blocking ocular drainage. This may explain the collagen compression. The resulting shift to a blue reflection may scatter light through photoreceptors rather than directly reflecting it, resulting in elevated retinal sensitivity via increased photon capture. This is, to our knowledge, the first description of a retinal structural adaptation to seasonal changes in environmental light. Increased sensitivity occurs at the cost of reduced acuity, but may be an important adaptation in reindeer to detect moving predators in the dark Arctic winter.
Detection of visual contours (strings of small oriented elements) is markedly poor in schizophrenia. This has previously been attributed to an inability to group local information across space into a global percept. Here, we show that this failure actually originates from a combination of poor encoding of local orientation and abnormal processing of visual context.
There is considerable interest in how humans estimate the number of objects in a scene in the context of an extensive literature on how we estimate the density (i.e., spacing) of objects. Here, we show that our sense of number and our sense of density are intertwined. Presented with two patches, observers found it more difficult to spot differences in either density or numerosity when those patches were mismatched in overall size, and their errors were consistent with larger patches appearing both denser and more numerous. We propose that density is estimated using the relative response of mechanisms tuned to low and high spatial frequencies (SFs), because energy at high SFs is largely determined by the number of objects, whereas low SF energy depends more on the area occupied by elements. This measure is biased by overall stimulus size in the same way as human observers, and by estimating number using the same measure scaled by relative stimulus size, we can explain all of our results. This model is a simple, biologically plausible common metric for perceptual number and density.
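The proposed metric can be sketched directly: density as the relative response of high- versus low-spatial-frequency channels, and number as that same measure scaled by relative stimulus size. The band edges below are illustrative assumptions, not the study's fitted channels:

```python
import numpy as np

def band_energy(image, lo, hi):
    """Energy in an annular spatial-frequency band (normalised units)."""
    f = np.fft.fft2(image - image.mean())
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    r = np.sqrt(fx**2 + fy**2)
    return np.sum(np.abs(f[(r >= lo) & (r < hi)])**2)

def density_estimate(image):
    """Relative response of high- vs low-SF mechanisms: high-SF energy
    tracks the number of elements, low-SF energy tracks occupied area.
    Band edges here are illustrative choices."""
    high = band_energy(image, 0.2, 0.5)
    low = band_energy(image, 0.01, 0.05)
    return high / (low + 1e-12)

def number_estimate(image, relative_area):
    """Number = the density measure scaled by relative stimulus size."""
    return density_estimate(image) * relative_area
```

On this account, a larger patch inflates both estimates in the same direction as the human biases reported above, because stimulus size enters the computation both through the low-SF denominator and through the area scaling.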
The goal of the Cognitive Neuroscience Test Reliability and Clinical Applications for Schizophrenia (CNTRACS) Consortium was to develop measures of discrete cognitive processes, allowing for the interpretation of specific deficits that could be linked to specific neural systems. Here we report on the intertask, clinical, and functional correlates of the 4 tasks that were investigated in large groups of patients with schizophrenia (>100) and healthy controls (>73) at 5 sites across the United States. In both healthy and patient groups, the key dependent measures from the CNTRACS tasks were minimally intercorrelated, suggesting that they are measuring discrete abilities. Correlations were examined between CNTRACS tasks and measures of functional capacity, premorbid IQ, symptom severity, and level of community functioning. Performance on tasks measuring relational memory encoding, goal maintenance, and visual gain control was correlated with premorbid IQ, and the former two tasks with functional capacity. Goal maintenance task performance was negatively correlated with negative symptom severity and informant reports of community function. These correlations reflect the relationship of specific abilities with functional outcome. They are somewhat lower than functional outcome correlations observed with conventional neuropsychological tests that confound multiple cognitive and motivational deficits. The measures of visual integration and gain control were not significantly correlated with clinical symptoms or function. These results suggest that the CNTRACS tasks measure discrete cognitive abilities, some of which relate to aspects of functional capacity/outcome in schizophrenia.
Vanishing Optotype letters have a pseudo high-pass design so that the mean luminance of the target is the same as the background and the letters thus vanish soon after the resolution threshold is reached. We wished to determine the variability of acuity measurements using these letters compared to conventional letters, and in particular how acuity is affected by the number of alternatives available to the subject.
Previously reported superior visual acuity (VA) in autism spectrum conditions (ASC) may have resulted from the methodological settings used (Ashwin, Ashwin, Rhydderch, Howells, & Baron-Cohen, 2009). The current study re-tested whether participants with (N = 20) and without (N = 20) ASC differ on psychophysical measures of VA. Participants' vision was corrected before acuity measurement, minimising refractive blur. VA was assessed with an ETDRS chart as well as the Freiburg Visual Acuity and Contrast Test (FrACT). FrACT testing was undertaken at 4 m (avoiding limitations of pixel size), using 36 trials (avoiding fatigue). Best corrected VA was significantly better than the initial habitual acuity in both groups, but adults with and without ASC did not differ on ETDRS or FrACT binocular VA. Future research should examine at which level of visual processing sensory differences emerge.
In the peripheral visual field, nearby objects can make one another difficult to recognize (crowding) in a manner that critically depends on their separation. We manipulated the apparent separation of objects using the illusory shifts in perceived location that arise from local motion to determine if crowding depends on physical or perceived location. Flickering Gabor targets displayed between either flickering or drifting flankers were used to (a) quantify the perceived target-flanker separation and (b) measure discrimination of the target orientation or spatial frequency as a function of physical target-flanker separation. Relative to performance with flickering targets, we find that flankers drifting away from the target improve discrimination, while those drifting toward the target degrade it. When plotted as a function of perceived separation across conditions, the data collapse onto a single function indicating that it is perceived and not physical location that determines the magnitude of visual crowding. There was no measurable spatial distortion of the target that could explain the effects. This suggests that crowding operates predominantly in extrastriate visual cortex and not in early visual areas where the response of neurons is retinotopically aligned with the physical position of a stimulus.
The response of motion-selective neurons in primary visual cortex is ambiguous with respect to the two-dimensional (2D) velocity of spatially extensive objects. To investigate how local neural activity is integrated in the computation of global motion, we asked observers to judge the direction of a rigidly translating natural scene viewed through 16 apertures. We report a novel relative oblique effect: local contour orientations parallel or orthogonal to the direction of motion yield more precise and less biased estimates of direction than other orientations. This effect varies inversely with the local orientation variance of the natural scenes. Analysis of contour orientations across aperture pairings extends previous research on plaids and indicates that observers are biased toward the faster moving contour for Type I pairings. Finally, we show that observers' bias and precision as a function of the orientation statistics of natural scenes can be accounted for by an interaction between naturally arising anisotropies in natural scenes and a template model of MT that is optimally tuned for isotropic stimuli.
Although visual systems are optimized to deal with the natural visual environment, our understanding of human motion perception is in large part based on the use of artificial stimuli. Here, we assessed observers' ability to estimate the direction of translating natural images and fractals by having them adjust the orientation of a subsequently viewed line. A system of interleaved staircases, driven by observers' direction estimates, ensured that stimuli were presented near one of 16 reference directions. The resulting error distributions (i.e., the differences between reported and true directions) reveal several anisotropies in global motion processing. First, observers' estimates are biased away from cardinal directions (reference repulsion). Second, the standard deviations of estimates show an "oblique effect", being approximately 45% lower around cardinal directions. Third, errors around cardinal directions are more likely (by approximately 22%) to approach zero than would be consistent with Gaussian-distributed errors, suggesting that motion processing minimizes the number as well as the magnitude of errors. Fourth, errors are similar for natural scenes and fractals, indicating that observers do not use top-down information to improve performance. Finally, adaptation to unidirectional motion modifies observers' bias by amplifying existing repulsion (e.g., around cardinal directions). This bias change can improve direction discrimination but is not due to a reduction in variability.
We investigated how crowding (a breakdown in object recognition that occurs in the presence of nearby distracting clutter) works for complex letter-like stimuli. Subjects reported the orientation (up/down/left/right) of a T target, abutted by a single flanker composed of randomly positioned horizontal and vertical bars. In addition to familiar retinotopic anisotropies (e.g., more crowding from more eccentric flankers), we report three object-centered anisotropies. First, inversions of the target element were rare: errors included twice as many ±90° as 180° target rotations. Second, flankers were twice as intrusive when they lay above or below (end-flanking) compared to left or right (side-flanking) of an upright T target (an effect that holds under global rotation of the target-flanker pair). Third, end flankers induce subjects to make erroneous reports that resemble the flanker (producing a structured pattern of errors), but errors induced by side flankers do not (instead producing random errors). A model based on probabilistic weighted averaging of the feature positions within contours can account for these effects. Thus, we demonstrate a set of seemingly "high-level" object-centered crowding effects that can arise from "low-level" interactions between the features of letter-like elements.
Crowding is the breakdown in object recognition that occurs in cluttered visual environments and the fundamental limit on peripheral vision, affecting identification within many visual modalities and across large spatial regions. Though frequently characterized as a disruptive process through which object representations are suppressed or lost altogether, we demonstrate that crowding systematically changes the appearance of objects. In particular, target patches of visual noise that are surrounded ("crowded") by oriented Gabor flankers become perceptually oriented, matching the flankers. This was established with a change-detection paradigm: under crowded conditions, target changes from noise to Gabor went unnoticed when the Gabor orientation matched the flankers (and the illusory target percept), despite being easily detected when they differed. Rotation of the flankers (leaving target noise unaltered) also induced illusory target rotations. Blank targets led to similar results, demonstrating that crowding can induce apparent structure where none exists. Finally, adaptation to these stimuli induced a tilt aftereffect at the target location, consistent with signals from the flankers "spreading" across space. These results confirm predictions from change-based models of crowding, such as averaging, and establish crowding as a regularization process that simplifies the peripheral field by promoting consistent appearance among adjacent objects.
Visual crowding is a breakdown in object identification that occurs in cluttered scenes, a process that represents the principal restriction on visual performance in the periphery. When crowded objects are presented experimentally, a key finding is that observers frequently report nearby flanking items instead of the target. This observation has led to the proposal that crowding reflects increased noise in the positional code for objects, although how the presence of nearby objects might disrupt positional encoding remains unclear. We quantified this disruption using cross-like stimuli, where observers judged whether the horizontal target line was positioned above or below the stimulus midpoint. Overall, observers were poorer at judging position in the presence of crowding flankers. However, offsetting the horizontal lines in the flankers also led observers to report that the horizontal line in the target was shifted in the same direction, an effect that held for subthreshold flanker offsets. In short, crowding induced both random and systematic errors in observers' judgments of position, with or without the detection of flanker structure. Computational modeling reveals that perceived position in the presence of flankers follows a weighted average of noisy target- and flanker-line positions, rather than a substitution of flanker features into the target, as has been proposed previously. Together, our results suggest that crowding is a preattentive process that uses averaging to regularize the noisy representation of position in the periphery.
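A weighted-average account of this kind is easy to sketch: the reported position is a reliability-weighted mean of noisy target and flanker positions, so flanker offsets produce a systematic pull on the mean while the added noise terms produce random errors. The weight and noise values below are illustrative assumptions, not the study's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def perceived_offset(target, flanker, w_flanker=0.3,
                     noise_sd=0.1, n_trials=10000):
    """Weighted-average model of crowded position: on each trial the
    reported offset is a weighted mean of noisy target- and
    flanker-line positions (weight and noise SD are illustrative
    assumptions). Returns one simulated report per trial."""
    t = target + rng.normal(0.0, noise_sd, n_trials)
    f = flanker + rng.normal(0.0, noise_sd, n_trials)
    return (1 - w_flanker) * t + w_flanker * f

# An offset flanker pulls the mean reported target position in the
# same direction, even though the target itself is unshifted.
reports = perceived_offset(target=0.0, flanker=0.5)
assert reports.mean() > 0.0
```

Note the contrast with a substitution model, which would predict a bimodal mixture of target and flanker reports rather than a single shifted distribution.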
The contrast sensitivity function is routinely measured in the laboratory with sine-wave gratings presented on homogeneous gray backgrounds; natural images, by contrast, are composed of a broad range of spatial and temporal structures. In order to extend channel-based models of visual processing to more natural conditions, we examined how contrast sensitivity varies with the context in which it is measured. We report that contrast sensitivity is quite different under laboratory versus natural viewing conditions: adaptation or masking with natural scenes attenuates contrast sensitivity at low spatial and temporal frequencies. Expressed another way, viewing stimuli presented on homogeneous screens overcomes chronic adaptation to the natural environment and causes a sharp, unnatural increase in sensitivity to low spatial and temporal frequencies. Consequently, the standard contrast sensitivity function is a poor indicator of sensitivity to structure in natural scenes. The magnitude of masking by natural scenes is relatively independent of local contrast but depends strongly on the density of edges, even though neither greatly affects the local amplitude spectrum. These results suggest that sensitivity to spatial structure in natural scenes depends on the distribution of local edges as well as the local amplitude spectrum.
A moving object elicits responses from V1 neurons tuned to a broad range of locations, directions, and spatiotemporal frequencies. Global pooling of such signals can overcome their intrinsic ambiguity in relation to the object's direction/speed (the "aperture problem"); here we examine the role of low spatial frequencies (SFs) and second-order statistics in this process. Subjects made a 2AFC fine direction-discrimination judgement of naturally contoured stimuli viewed rigidly translating behind a series of small circular apertures. This configuration allowed us to manipulate the scene by randomly switching which portion of the stimulus was presented behind each aperture or by occluding certain spatial frequency bands. We report that global motion integration is (a) largely insensitive to the second-order statistics of such stimuli and (b) rigidly broadband even in the presence of a disrupted low-SF component.
It has been proposed that visual crowding (the breakdown in recognition that occurs when objects are presented in cluttered scenes) reflects a limit imposed by visual attention. We examined this idea in the context of an orientation averaging task, having subjects judge the mean orientation of a set of oriented signal elements either in isolation or "crowded" by nearby randomly oriented elements. In some conditions, subjects also had to perform an attentionally demanding secondary task. By measuring performance at different levels of signal orientation variability, we show that crowding increases subjects' local uncertainty (about the orientation of individual elements) but that diverting attention reduces their global efficiency (the effective number of elements they can average over). Furthermore, performance with the same stimulus sequence, presented multiple times, reveals that crowding does not induce more stimulus-independent variability (as would be predicted by some accounts based on attention). We conclude that crowding and attentional load have dissociable perceptual consequences for orientation averaging, suggesting distinct neural mechanisms for each. For the task we examined, attention can modulate the effects of crowding by changing the efficiency with which information is analyzed by the visual system; but since crowding changes local uncertainty, not efficiency, crowding does not reflect an attentional limit.
Much research over the last decade has examined how the brain links local activity within primary visual cortex to signal the presence of extended global structure. Here we bring together two themes within this area by addressing how the immediate context in which features arise influences how they are integrated into contours. Specifically, observers were required to detect and discriminate the shape of contours that were surrounded by elements with a fixed orientation offset relative to the contour elements. By comparing performance with contours made of elements oriented either near parallel ("snakes") or near perpendicular ("ladders") to the contour orientation, we were able to isolate the effect of orientation contrast on observers' ability to perform our task with near-collinear contour structure. We report both substantial facilitation of contour integration in the presence of near-perpendicular surrounds and inhibition in the presence of near-parallel surrounds. These results are consistent with the known orientation dependence of suppressive surround interactions in primary visual cortex and suggest that the "rules of association" for contour integration must incorporate the influence of local orientation context. Specifically, we show that our results are consistent with contour integration relying on an opponent-orientation energy response from a bank of first-stage oriented filters.
Vision research has made very substantial progress towards understanding how we see. It is one area of psychology where the three-way combination of behavioural measurements (psychophysics), brain imaging, and computational studies has been employed quite routinely for some years. The purpose of this paper is to demonstrate a relatively unusual form of computational modelling that we characterise as involving image descriptions. Image descriptions are statements about structures in images and relationships between structures. Most modelling in vision is either conceived in fairly abstract terms or is done at the level of images. Neither is entirely satisfactory, and image descriptions are a simple formulation of age-old ideas about a vocabulary of image features that are detected and parameterized from actual digital images. For our example, we use the domain of the visual perception of printed text. This is an area that has been characterized by thorough, robust psychophysical experiments. The fundamental requirements of visual processing in this domain are: grouping some parts of the image into words while, at the same time, segmenting words from each other. We show how these are readily understood in terms of our model of image descriptions, and show quantitatively that typographical practice, refined over centuries, is close to optimal for the visual system, at least as represented by our model. In addition, we show that the same notion of image descriptions could, in principle, support word recognition in certain circumstances.
The human visual system has a remarkable ability to accurately estimate the relative brightness of adjacent objects despite large variations in illumination. However, the lightness of two identical equiluminant gray regions can appear quite different when a light-dark luminance transition falls between them. This illusory brightness "filling-in" phenomenon, the Craik-Cornsweet-O'Brien (CCOB) illusion, exposes fundamental assumptions made by the visual system in estimating lightness, but its neural basis remains unclear. While the responses of high-level visual cortex can be correlated with perception of the CCOB, simple computational models suggest that the effect may originate from a much lower level, possibly subcortical. Here, we used high spatial resolution functional magnetic resonance imaging to show that the CCOB illusion is strongly correlated with signals recorded from the human lateral geniculate nucleus. Moreover, presenting the light and dark luminance transitions that induce the CCOB effect separately to each eye abolishes the illusion, suggesting that it depends on eye-specific signals. Our observations suggest that the CCOB effect arises from signals in populations of monocular neurons very early in the human geniculostriate visual pathway.
Albino mammals exhibit a range of visual deficits including disrupted hemispheric pathways, an underdeveloped central retina, and nystagmus. Recently, it has been reported that albino animals also show deficits in the processing of visual motion, exhibiting higher motion coherence thresholds (MCTs; the proportion of coherently moving elements within a field of randomly moving distracters required to reliably report direction). Here we compare MCTs-collected from human observers with albinism-with an equivalent noise analysis of their fine-direction discrimination and report that their loss in motion sensitivity operates at both the level of local motion processing (of small objects) and at the later stage of global motion pooling. We also compare results from observers with aniridia (characterized by underdeveloped central retina and nystagmus but normal hemispheric visual pathways) and a rare group of observers with albinism who show no nystagmus. For the observers tested, nystagmus proved to be a common feature of individuals showing elevated MCTs. Since it is likely that motion perception is influenced by environmental factors early in development we postulate that the effect of congenital nystagmus on the temporal structure of the natural visual diet disrupts the ability of motion pathways to form normally.
It has been proposed that faces are represented in the visual brain as points within a multi-dimensional "face space", with the average at its origin. We adapted a psychophysical procedure that measures non-linearities in contrast transduction (by measuring discrimination around different reference/pedestal levels of contrast) to examine the encoding of facial identity within such a notional space. Specifically, we had subjects perform identity discrimination at various pedestal levels of identity (varying from average/0% to caricature/125% identity) to derive "identity dipper functions". Results indicate that subjects are generally best at spotting identity change in neither average nor full-identity faces, but rather in faces containing an intermediate level of identity (which varies from face to face). The overall pattern of results is consistent with the neural encoding of faces involving a single modest non-linear transformation of identity that is consistent across faces and subjects, but that is scaled according to the distinctiveness of the face.
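Dipper functions of this kind fall naturally out of a single nonlinear transducer. The sketch below uses a Legge-Foley-style accelerating-then-compressive nonlinearity; the exponents, the additive constant, and the response criterion are illustrative assumptions, not values fitted to the face data:

```python
def transducer(x, p=2.4, q=2.0, z=0.01):
    """Accelerating-then-compressive nonlinearity. Here x stands in for
    identity strength rather than contrast (an assumed mapping)."""
    return x**p / (x**q + z)

def increment_threshold(pedestal, criterion=0.05, hi=2.0, tol=1e-7):
    """Smallest increment dx such that the transducer response grows by
    a fixed criterion above the pedestal response, found by bisection
    (valid because the transducer is monotonic for p > q)."""
    base = transducer(pedestal)
    lo_b, hi_b = 0.0, hi
    while hi_b - lo_b > tol:
        mid = 0.5 * (lo_b + hi_b)
        if transducer(pedestal + mid) - base >= criterion:
            hi_b = mid
        else:
            lo_b = mid
    return hi_b

# The "dipper": thresholds at an intermediate pedestal fall below the
# zero-pedestal threshold (facilitation), then rise again (masking).
for ped in (0.0, 0.05, 0.2, 0.5):
    print(ped, round(increment_threshold(ped), 4))
```

The dip arises because an intermediate pedestal pushes the stimulus onto the steep, accelerating part of the transducer, where a small physical increment produces a large response change.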
The structure of the human face allows it to signal a wide range of useful information about a person's gender, identity, mood, etc. We show empirically that facial identity information is conveyed largely via mechanisms tuned to horizontal visual structure. Specifically, observers perform substantially better at identifying faces that have been filtered to contain just horizontal information compared to any other orientation band. We then show, computationally, that horizontal structures within faces have an unusual tendency to fall into vertically co-aligned clusters compared with images of natural scenes. We call these clusters "bar codes" and propose that they have important computational properties. We propose that it is this property that makes faces "special" visual stimuli: they are able to transmit information as a reliable spatial sequence, a highly constrained one-dimensional code. We show that such structure affords computational advantages for face detection and decoding, including robustness to normal environmental image degradation, but makes faces vulnerable to certain classes of transformation that change the sequence of bars, such as spatial inversion or contrast-polarity reversal.
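Filtering an image to retain a single orientation band, as described above, is typically done in the Fourier domain, where horizontal image structure lives along the vertical frequency axis. A minimal sketch (the 30° bandwidth is an illustrative assumption):

```python
import numpy as np

def orientation_filter(img, center_deg, bandwidth_deg=30.0):
    """Keep only Fourier energy within +/- bandwidth_deg of a target
    orientation. Horizontal image structure (e.g., eyes, brows, mouth)
    corresponds to energy at 90 deg in this frequency-domain convention."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0
    # angular distance to the target orientation, wrapped at 180 deg
    d = np.minimum(np.abs(theta - center_deg), 180.0 - np.abs(theta - center_deg))
    mask = (d <= bandwidth_deg).astype(float)
    mask[0, 0] = 1.0  # preserve the DC term (mean luminance)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

# A horizontal grating (bars varying along y) survives a 90 deg filter
# but is removed by a 0 deg (vertical-structure) filter.
grating = np.sin(2 * np.pi * np.arange(32) / 8)[:, None] * np.ones((1, 32))
kept = orientation_filter(grating, 90.0)
removed = orientation_filter(grating, 0.0)
```

Because the orientation mask is symmetric under 180° rotation of the frequency plane, the filtered spectrum stays Hermitian and the output remains a real image.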
Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affects eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A χ² test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed that patients with PVFL show a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search.
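Comparing saccade-direction distributions with a χ² test amounts to testing binned direction counts against a reference distribution. A minimal stdlib-only sketch; the bin counts and the df = 7 critical value are illustrative, not the patients' data:

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic over direction bins."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: 100 saccades binned into 8 direction sectors
# (45 deg each), tested against a uniform expectation of 12.5 per bin.
observed = [30, 10, 10, 10, 10, 10, 10, 10]  # rightward bias
expected = [sum(observed) / len(observed)] * len(observed)

stat = chi_square_stat(observed, expected)
CRITICAL_DF7_P01 = 18.475  # chi-square critical value, df = 7, alpha = 0.01
print(stat, stat > CRITICAL_DF7_P01)
```

In practice a library routine such as `scipy.stats.chisquare` returns the statistic and p-value directly; the point here is only the shape of the comparison between two direction histograms.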
During development, the presence of strabismus and anisometropia frequently leads to amblyopia, a visual disorder characterized by interocular acuity differences. Although additional deficits in contrast sensitivity, crowding (the impaired recognition of closely spaced objects), and stereoacuity are common, the relationship between these abilities is unclear.
Vanishing optotypes (VOs) are pseudo high-pass letters whose mean luminance matches the background so that they "vanish" when the recognition acuity threshold is reached in the fovea. We determined the effect of increasing blur on acuity for these optotypes and conventional letters, in both foveal and extrafoveal viewing.
Age-related macular degeneration (AMD) results in loss of central vision and a dependence on low-resolution peripheral vision. While many image enhancement techniques have been proposed, there is a lack of quantitative comparison of their effectiveness. We developed a natural visual search task that uses patients' eye movements as a quantitative and functional measure of the efficacy of image modification.
Dakin and Baruch (2009) investigated how context influences contour integration, specifically reporting that near-perpendicular surrounding elements reduced the exposure duration observers required to localize and determine the shape of contours (compared to performance with randomly oriented surrounds), while near-parallel surrounds increased this time. Here, we ask if this effect might be a manifestation of visual crowding (the disruptive influence of "visual clutter" on object recognition). We first report that the effect generalizes to simple contour localization (without explicit shape discrimination) and influences tolerance to orientation jitter in the same way it affects threshold exposure duration. We next directly examined the role of crowding by quantifying observers' local uncertainty (about the orientation of the elements that comprised our contours), showing that this largely accounts for the effects of context on global contour integration. These findings support the idea that context influences contour integration at a predominantly local stage of processing and that the local effects of crowding eventually influence downstream stages in the cortical processing of visual form.
While observers are adept at judging the density of elements (e.g., in a random-dot image), it has recently been proposed that they also have an independent visual sense of number. To test the independence of number and density discrimination, we examined the effects of manipulating stimulus structure (patch size, element size, contrast, and contrast-polarity) and available attentional resources on both judgments. Five observers made a series of two-alternative, forced-choice discriminations based on the relative numerosity/density of two simultaneously presented patches containing 16-1,024 Gaussian blobs. Mismatches of patch size and element size (across reference and test) led to bias and reduced sensitivity in both tasks, whereas manipulations of contrast and contrast-polarity had varied effects on observers, implying differing strategies. Nonetheless, the effects reported were consistent across density and number judgments, the only exception being when luminance cues were made available. Finally, density and number judgment were similarly impaired by attentional load in a dual-task experiment. These results are consistent with a common underlying metric to density and number judgments, with the caveat that additional cues may be exploited when they are available.
Object recognition in the peripheral visual field is limited by crowding: the disruptive influence of nearby clutter. Despite its severity, little is known about the cortical locus of crowding. Here, we examined the neural correlates of crowding by combining event-related fMRI adaptation with a change-detection paradigm. Crowding can change the appearance of objects, such that items become perceptually matched to surrounding objects; we used this change in appearance as a signature of crowding and measured brain activity that correlated with the crowded percept. Observers adapted to a peripheral patch of noise surrounded by four Gabor flankers. When crowded, the noise appears oriented and perceptually indistinguishable from the flankers. Consequently, substitution of the noise for a Gabor identical to the flankers ("change-same") is rarely detected, whereas substitution for an orthogonal Gabor ("change-different") is rarely missed. We predicted that brain areas representing the crowded percept would show repetition suppression in change-same trials but release from adaptation in change-different trials. This predicted pattern was observed throughout cortical visual areas V1-V4, increasing in strength from early to late visual areas. These results depict crowding as a multistage process, involving even the earliest cortical visual areas, with perceptual consequences that are increasingly influenced by later visual areas.
Crowding, the deleterious influence of clutter on object recognition, disrupts the identification of visual features as diverse as orientation, motion, and color. It is unclear whether this occurs via independent feature-specific crowding processes (preceding the feature binding process) or via a singular (late) mechanism tuned for combined features. To examine the relationship between feature binding and crowding, we measured interactions between the crowding of relative position and orientation. Stimuli were a target cross and two flanker crosses (each composed of two near-orthogonal lines), 15 degrees in the periphery. Observers judged either the orientation (clockwise/counterclockwise) of the near-horizontal target line, its position (up/down relative to the stimulus center), or both. For single-feature judgments, crowding affected position and orientation similarly: thresholds were elevated and responses biased in a manner suggesting that the target appeared more like the flankers. These effects were tuned for orientation, with near-orthogonal elements producing little crowding. This tuning allowed us to separate the predictions of independent (feature specific) and combined (singular) models: for an independent model, reduced crowding for one feature has no effect on crowding for other features, whereas a combined process affects either all features or none. When observers made conjoint judgments, a reduction of orientation crowding (by increasing target-flanker orientation differences) increased the rate of correct responses for both position and orientation, as predicted by our combined model. In contrast, our independent model incorrectly predicted a high rate of position errors, since the probability of positional crowding would be unaffected by changes in orientation. Thus, at least for these features, crowding is a singular process that affects bound position and orientation values in an all-or-none fashion.
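The independent-versus-combined model comparison can be made concrete with a toy calculation. Assume (purely for illustration) that a crowded feature is reported at chance (0.5) and an uncrowded feature is reported correctly; the crowding probabilities below are hypothetical, chosen only to show why the two models diverge for conjoint judgments:

```python
def p_correct(crowded):
    # hypothetical response rule: chance when crowded, perfect otherwise
    return 0.5 if crowded else 1.0

def conjoint_independent(p_crowd_ori, p_crowd_pos):
    """Feature-specific model: each feature is crowded on its own,
    so reducing orientation crowding leaves position accuracy unchanged."""
    p_ori = p_crowd_ori * p_correct(True) + (1 - p_crowd_ori) * p_correct(False)
    p_pos = p_crowd_pos * p_correct(True) + (1 - p_crowd_pos) * p_correct(False)
    return p_ori * p_pos

def conjoint_combined(p_crowd):
    """All-or-none model: both features are crowded together or not at all,
    so reducing crowding helps both features at once."""
    both_crowded = p_correct(True) ** 2
    both_clear = p_correct(False) ** 2
    return p_crowd * both_crowded + (1 - p_crowd) * both_clear

# Increasing the target-flanker orientation difference lowers orientation
# crowding (here, 0.8 -> 0.2). Combined: conjoint accuracy rises sharply.
# Independent: position remains crowded, capping conjoint accuracy.
print(conjoint_combined(0.8), conjoint_combined(0.2))
print(conjoint_independent(0.8, 0.8), conjoint_independent(0.2, 0.8))
```

The key qualitative difference matches the abstract's logic: only the combined model predicts that relieving orientation crowding also rescues position judgments.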
Previous studies of peripheral vision have shown that detection acuity is superior to resolution acuity for gratings over a range of contrasts, which is attributed to different limiting mechanisms (contrast insufficiency and neural undersampling) for the two tasks. To extend the analysis to letters in a way that avoided luminance cues, we used "vanishing optotype" characters, conveying second-order information, and constructed from tripole strokes having the same mean luminance as the surround. We measured the minimum letter size for detection and identification tasks for two different pairs of vanishing optotype characters (O vs. + and orthogonally oriented Landolt-Cs) as a function of contrast in central and peripheral vision. Foveally there was no significant difference between detection acuity and resolution acuity for either pair of letters over a range of stimulus contrasts from 20% to 100%, indicating performance is contrast-limited for both tasks. The same result was obtained at 30° eccentricity in the peripheral field for the O vs. + letters, again indicating performance is contrast-limited for both tasks. However, resolution acuity for the Landolt-C letters was significantly worse than detection acuity in the periphery over the same range of contrasts, which suggests performance is limited by neural undersampling for these letters. All of our experimental results are explained by a model of neural sampling in which detection acuity is determined by the size of neural receptive fields relative to the dimensions of the tripole responsible for spatial contrast, whereas resolution acuity is determined by the spacing of receptive fields relative to the spacing between strokes responsible for letter form.