Since the pioneering studies by Rosch and colleagues in the 1970s, it has been commonly agreed that basic-level perceptual categories (dog, chair...) are accessed faster than superordinate ones (animal, furniture...). Nevertheless, the speed at which objects presented in natural images can be processed in a rapid go/no-go superordinate visual categorization task has challenged this "basic-level advantage".
The ability of monkeys to categorize objects in visual stimuli such as natural scenes might rely on sets of low-level visual cues without any underlying conceptual abilities. Using a go/no-go rapid animal/non-animal categorization task with briefly flashed achromatic natural scenes, we show that both human and monkey performance is very robust to large variations of stimulus luminance and contrast. When mean luminance was increased or decreased by 25-50%, impairments in accuracy and speed were small. The largest impairment was found at the highest luminance value, with monkeys mainly impaired in accuracy (a drop of 6% correct vs. <1.5% in humans), whereas humans were mainly impaired in reaction time (a 20 ms increase in median reaction time vs. 4 ms in monkeys). Contrast reductions induced a large deterioration of image definition, but performance was again remarkably robust. Subjects scored well above chance level even when the contrast was only 12% of that of the original photographs (approximately 81% correct in monkeys; approximately 79% correct in humans). Accuracy decreased with contrast reduction but only reached chance level, in both species, in the most extreme condition, when only 3% of the original contrast remained. A progressive reaction-time increase was observed that reached 72 ms in monkeys and 66 ms in humans. These results demonstrate the remarkable robustness of the primate visual system in processing objects in natural scenes with large random variations in luminance and contrast. They illustrate the similarity with which performance is impaired in monkeys and humans under such stimulus manipulations. Finally, they show that in an animal categorization task, the performance of both monkeys and humans is largely independent of cues relying on global luminance or the fine definition of stimuli.
Although we are beginning to understand how observed actions performed by conspecifics with a single hand are processed and how bimanual actions are controlled by the motor system, we know very little about the processing of observed bimanual actions. We used fMRI to compare the observation of bimanual manipulative actions with that of their unimanual components, relative to visual control conditions equalized for visual motion. Bimanual action observation did not activate any region specialized for processing visual signals related to this more elaborate action. On the contrary, observation of bimanual and unimanual actions activated similar occipito-temporal, parietal, and premotor networks. However, whole-brain as well as region-of-interest (ROI) analyses revealed that this network functions differently under bimanual and unimanual conditions. Indeed, in bimanual conditions, activity in the network was overall more bilateral, especially in parietal cortex. In addition, ROI analyses indicated bilateral parietal activation patterns across hand conditions that were distinctly different from those at other levels of the action-observation network. These activation patterns suggest that while occipito-temporal and premotor levels are involved in processing the kinematics of the observed actions, the parietal cortex is more involved in processing static, postural aspects of the observed action. This study adds bimanual cooperation to the growing list of distinctions between parietal and premotor cortex regarding factors affecting visual processing of observed actions.