Slowly adapting type I (SA-I) afferents deliver sensory signals to the somatosensory cortex during low-frequency (or static) mechanical stimulation. It has been reported that the somatosensory projection from SA-I afferents is effective and reliable for object grasping and manipulation. Despite a large number of neuroimaging studies on cortical activation responding to tactile stimuli mediated by SA-I afferents, how sensory information of such tactile stimuli flows over the somatosensory cortex remains poorly understood. In this study, we investigated tactile information processing of pressure stimuli between the primary (SI) and secondary (SII) somatosensory cortices by measuring effective connectivity using dynamic causal modeling (DCM). We applied pressure stimuli for 3 s to the right index fingertip of healthy participants and acquired functional magnetic resonance imaging (fMRI) data using a 3T MRI system.
Acne vulgaris is a common inflammatory disease that manifests on the face and affects appearance. In general, facial acne has a wide-ranging negative impact on the psychosocial functioning of acne sufferers and leaves physical and emotional scars. In the present study, we investigated whether patients with acne vulgaris demonstrate enhanced psychological bias when assessing the attractiveness of faces with acne symptoms, and whether they devote greater selective attention to acne lesions than acne-free (control) individuals do. Participants viewed images of faces under two different skin (acne vs. acne-free) and emotional facial expression (happy and neutral) conditions. They rated the attractiveness of the faces, and the time spent fixating on the acne lesions was recorded with an eye tracker. We found that the gap in perceived attractiveness between acne and acne-free faces was greater for acne sufferers than for controls. Furthermore, patients with acne fixated longer on facial regions exhibiting acne lesions than did control participants, irrespective of the facial expression depicted. In summary, patients with acne have a stronger attentional bias for acne lesions and focus more on the skin lesions than do those without acne. Clinicians treating the skin problems of patients with acne should consider these psychological and emotional scars.
The idea that faces are represented within a structured face space (Valentine Quarterly Journal of Experimental Psychology 43: 161-204, 1991) has gained considerable experimental support, from both physiological and perceptual studies. Recent work has also shown that faces can even be recognized haptically, that is, from touch alone. Although some evidence favors congruent processing strategies in the visual and haptic processing of faces, the question of how similar the two modalities are in terms of face processing remains open. Here, this question was addressed by asking whether there is evidence for a haptic face space, and if so, how it compares to visual face space. For this, a physical face space was created, consisting of six laser-scanned individual faces, their morphed average, 50%-morphs between two individual faces, as well as 50%-morphs of the individual faces with the average, resulting in a set of 19 faces. Participants then rated either the visual or haptic pairwise similarity of the tangible 3-D face shapes. Multidimensional scaling analyses showed that both modalities extracted perceptual spaces that conformed to critical predictions of the face space framework, hence providing support for similar processing of complex face shapes in haptics and vision. Despite the overall similarities, however, systematic differences also emerged between the visual and haptic data. These differences are discussed in the context of face processing and complex-shape processing in vision and haptics.
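The multidimensional scaling step used in such analyses can be sketched compactly. The following is a minimal, hypothetical illustration (not the study's analysis pipeline): classical (Torgerson) MDS recovers a low-dimensional configuration from a matrix of pairwise dissimilarities, such as those derived from similarity ratings. All data here are toy values.

```python
# Minimal sketch of classical MDS on a dissimilarity matrix.
# Illustrative only; the face counts and ratings are toy assumptions.
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Classical (Torgerson) MDS on a symmetric dissimilarity matrix."""
    n = dissim.shape[0]
    d2 = dissim ** 2
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ d2 @ j                        # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1][:n_dims]   # largest eigenvalues first
    coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))
    return coords

# Toy example: 4 "faces" whose dissimilarities come from points on a line.
pts = np.array([[0.0], [1.0], [2.0], [3.0]])
dissim = np.abs(pts - pts.T)
coords = classical_mds(dissim, n_dims=1)
# For exact Euclidean input, the recovered inter-point distances
# match the original dissimilarities (up to sign of the axis).
recovered = np.abs(coords - coords.T)
print(np.allclose(recovered, dissim))  # True
```

With real rating data, the eigenvalue spectrum of `b` also indicates how many perceptual dimensions the ratings support.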
Congenital prosopagnosia (CP), an innate impairment in recognizing faces, as well as the other-race effect (ORE), a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants and German controls on three different tasks involving faces and objects. First we tested all participants on the Cambridge Face Memory Test in which they had to recognize Caucasian target faces in a 3-alternative-forced-choice task. German controls performed better than Koreans who performed better than prosopagnosics. In the second experiment, participants rated the similarity of Caucasian faces that differed parametrically in either features or second-order relations (configuration). Prosopagnosics were less sensitive to configuration changes than both other groups. In addition, while all groups were more sensitive to changes in features than in configuration, this difference was smaller in Koreans. In the third experiment, participants had to learn exemplars of artificial objects, natural objects, and faces and recognize them among distractors of the same category. Here prosopagnosics performed worse than participants in the other two groups only when they were tested on face stimuli. In sum, Koreans and prosopagnosic participants differed from German controls in different ways in all tests. This suggests that German congenital prosopagnosics perceive Caucasian faces differently than do Korean participants. Importantly, our results suggest that different processing impairments underlie the ORE and CP.
Acupuncture stimulation increases local blood flow around the site of stimulation and induces signal changes in brain regions related to the body matrix. The rubber hand illusion (RHI) is an experimental paradigm that manipulates important aspects of bodily self-awareness. The present study aimed to investigate how modifications of body ownership using the RHI affect local blood flow and cerebral responses during acupuncture needle stimulation. During the RHI, acupuncture needle stimulation was applied to the real left hand while measuring blood microcirculation with a laser Doppler imager (Experiment 1, N = 28) and concurrent brain signal changes using functional magnetic resonance imaging (fMRI; Experiment 2, N = 17). When the body ownership of participants was altered by the RHI, acupuncture stimulation resulted in a significantly lower increase in local blood flow (Experiment 1), and significantly less brain activation was detected in the right insula (Experiment 2). This study found changes in both local blood flow and brain responses during acupuncture needle stimulation following modification of body ownership. These findings suggest that physiological responses during acupuncture stimulation can be influenced by the modification of body ownership.
Natural image statistics is an important area of research in the cognitive sciences and computer vision. Visualization of statistical results can help identify clusters and anomalies, as well as analyze deviation, distribution, and correlation. Furthermore, it can provide visual abstractions and symbolism for categorized data. In this paper, we begin our study of the visualization of image statistics by considering visual representations of power spectra, which are commonly used to visualize different categories of images. We show that they convey a limited amount of statistical information about image categories and support analytical tasks only poorly. We then introduce several new visual representations that convey different or additional information about image statistics. We apply ANOVA to the image statistics to select statistically more meaningful measurements in our design process. A task-based user evaluation was carried out to compare the new visual representations with conventional power spectrum plots. Based on the results of the evaluation, we further improved the visualizations by introducing composite visual representations of image statistics.
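The power spectra conventionally plotted for image categories are typically radially averaged. A minimal sketch of how such a spectrum might be computed (illustrative only, not the paper's code; the image here is random noise) is:

```python
# Radially averaged power spectrum of a grayscale image:
# mean Fourier power as a function of spatial-frequency radius.
import numpy as np

def radial_power_spectrum(image):
    """Mean power within each integer spatial-frequency band."""
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # Average power over all frequencies falling in the same radius bin.
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))     # toy "image"
spectrum = radial_power_spectrum(img)
print(spectrum.shape)                   # one value per integer radius
```

For natural images, plotting `spectrum` on log-log axes typically reveals the well-known approximately 1/f^2 falloff, which is what category-level spectrum plots summarize.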
Human observers are experts at visual face recognition, owing to specialized visual mechanisms for face processing that evolve with perceptual expertise. Such expertise has long been attributed to the use of configural processing, enabled by fast, parallel encoding of the visual information in the face. Here we tested whether participants can learn to efficiently recognize faces that are serially encoded, that is, when only partial visual information about the face is available at any given time. For this, ten participants were trained in gaze-restricted face recognition, in which face masks were viewed through a small aperture controlled by the participant. Tests comparing trained with untrained performance revealed (1) a marked improvement in speed and accuracy, (2) a gradual development of configural processing strategies, and (3) participants' ability to rapidly learn and accurately recognize novel exemplars. This performance pattern demonstrates that participants were able to learn new strategies to compensate for the serial nature of information encoding. The results are discussed in terms of expertise acquisition and relevance for other sensory modalities that rely on serial encoding.
Humans are experts at face processing: this expertise develops over the course of several years, given visual input about faces from infancy. Recent studies have shown that individuals can also recognize faces haptically, albeit at lower performance than visually. Given that blind individuals are extensively trained in haptic processing, one might expect them to perform better at recognizing faces from touch than sighted individuals. Here, we tested this hypothesis using matched groups of sighted, congenitally blind, and acquired-blind individuals. Surprisingly, we found little evidence of a performance benefit for blind participants compared with sighted controls. Moreover, the congenitally blind group performed significantly worse than both the sighted and the acquired-blind groups. Our results are consistent with the hypothesis that visual expertise may be necessary for haptic face recognition; hence, even extensive haptic training cannot easily compensate for deficits in visual processing.
Acupuncture is a therapeutic treatment defined as the insertion of needles into the body at specific points (i.e., acupoints). Advances in functional neuroimaging have made it possible to study brain responses to acupuncture; however, previous studies have mainly concentrated on acupoint specificity. Here, we focused on the functional brain responses that occur as a result of needle insertion into the body. An activation likelihood estimation meta-analysis was carried out to investigate common characteristics of brain responses to acupuncture needle stimulation compared to tactile stimulation. A total of 28 functional magnetic resonance imaging studies, comprising 51 acupuncture and 10 tactile stimulation experiments, were selected for the meta-analysis. Following acupuncture needle stimulation, activation in the sensorimotor cortical network, including the insula, thalamus, anterior cingulate cortex, and primary and secondary somatosensory cortices, and deactivation in the limbic-paralimbic neocortical network, including the medial prefrontal cortex, caudate, amygdala, posterior cingulate cortex, and parahippocampus, were detected and assessed. Following control tactile stimulation, weaker patterns of brain responses were detected in similar areas. The activation and deactivation patterns following acupuncture stimulation suggest that the hemodynamic responses in the brain simultaneously reflect the sensory, cognitive, and affective dimensions of pain.
The facial feedback hypothesis suggests that feedback from cutaneous and muscular afferents influences our emotions during the control of facial expressions. Enhancing facial expressiveness produces an increase in autonomic arousal and self-reported emotional experience, whereas limiting facial expression attenuates these responses. The present study investigated differences in autonomic responses during imitated versus observed facial expressions. Thus, we obtained the facial electromyogram (EMG) of the corrugator muscle, and measured the skin conductance response (SCR) and pupil size (PS) of participants while they were either imitating or simply observing emotional expressions of anger. We found that participants produced significantly greater responses across all three measures (EMG, SCR, and PS) during active imitation than during passive observation. These results show that amplified feedback from facial muscles during imitation strengthens sympathetic activation in response to negative emotional cues. Our findings suggest that manipulations of muscular feedback could be used to modulate the bodily expression of emotion, including autonomic responses to the emotional cues.
Background. The rubber hand illusion (RHI) is an experimental paradigm that manipulates important aspects of body self-awareness. Objectives. We were interested in whether modifying bodily self-awareness by manipulation of body ownership and visual expectations using the RHI would change the subjective perception of pain as well as the autonomic response to acupuncture needle stimulation. Methods. Acupuncture needle stimulation was applied to the real hand during the RHI with (Experiment 1) or without (Experiment 2) visual expectation while measuring concurrent autonomic changes such as the skin conductance response (SCR). Subjective responses such as perception of the RHI and perceived pain were measured by questionnaires. Results. In Experiment 1, the amplitude of the increase in SCR was markedly higher during the synchronous session than during the asynchronous session. In Experiment 2, the amplitude of the increase in SCR was lower for the synchronous session than for the asynchronous session. Comparing the two experiments, visual expectation of needle stimulation produced a greater autonomic response to acupuncture stimulation. Conclusions. Our findings suggest that the sympathetic response to acupuncture needle stimulation is primarily influenced by visual expectation rather than by modifications of body ownership.
We present three experiments on horizon estimation. In Experiment 1, we verify the human ability to estimate the horizon in static images from visual input alone. Estimates are given without time constraints, with emphasis on precision, and serve as a baseline for evaluating horizon estimates from early visual processes. In Experiment 2, stimuli are presented for only [Formula: see text] ms and then masked to purge visual short-term memory, forcing estimates to rely on early processes only. The high agreement between estimates and the lack of a training effect show that enough information about viewpoint is extracted in the first few hundred milliseconds to make accurate horizon estimation possible. In Experiment 3, we investigate several strategies for estimating the horizon computationally and compare human with machine "behavior" across different image manipulations and scene types.
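One simple computational baseline for this task (an assumed illustration, not necessarily one of the paper's strategies) is to place the horizon at the image row with the strongest brightness change between adjacent rows, exploiting the typical sky/ground luminance contrast:

```python
# Hypothetical horizon-estimation baseline: pick the row where the
# mean absolute intensity difference between adjacent rows is largest.
import numpy as np

def estimate_horizon_row(image):
    """Return the row index just below the strongest horizontal edge."""
    row_diff = np.abs(np.diff(image.astype(float), axis=0)).mean(axis=1)
    return int(np.argmax(row_diff)) + 1  # boundary lies below the jump

# Toy scene: bright "sky" above row 20, dark "ground" below.
img = np.zeros((40, 60))
img[:20, :] = 1.0
print(estimate_horizon_row(img))  # 20
```

Such a gradient-based heuristic fails on cluttered scenes, which is exactly why comparing several strategies against human estimates, as the abstract describes, is informative.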
Although the hands are the most important tool humans use to manipulate objects, little is known about the haptic processing of natural objects. Here, we selected a unique set of natural objects, namely seashells, which vary along a number of object features while sharing others across all stimuli. To interact correctly with objects, we must identify or categorize them, and for both processes, measuring similarities between objects is crucial. Our goal was to better understand the haptic similarity percept by comparing it to the visual similarity percept. First, direct similarity measures were analyzed using multidimensional scaling techniques to visualize the perceptual spaces of both modalities. We found that the visual and haptic modalities form almost identical perceptual spaces. Next, we performed three different categorization tasks. All tasks showed that the haptic modality processes complex shapes highly accurately. Moreover, we found that objects grouped into the same category form regions within the perceptual space. Hence, in both modalities, perceived similarity constitutes the basis for categorizing objects, and both modalities focus on shape to form categories. Taken together, our results suggest that the same cognitive processes link haptic and visual similarity perception and the resulting categorization behavior.
Recognition and individuation of conspecifics by their face is essential for primate social cognition. This ability is driven by a mechanism that integrates the appearance of facial features with subtle variations in their configuration (i.e., second-order relational properties) into a holistic representation. So far, there is little evidence of whether our evolutionary ancestors show sensitivity to featural spatial relations and hence holistic processing of faces as shown in humans. Here, we directly compared macaques with humans in their sensitivity to configurally altered faces in upright and inverted orientations using a habituation paradigm and eye tracking technologies. In addition, we tested for differences in processing of conspecific faces (human faces for humans, macaque faces for macaques) and non-conspecific faces, addressing aspects of perceptual expertise. In both species, we found sensitivity to second-order relational properties for conspecific (expert) faces, when presented in upright, not in inverted, orientation. This shows that macaques possess the requirements for holistic processing, and thus show similar face processing to that of humans.
Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks which are representative of everyday learning scenarios: in a fully unconstrained task, objects were freely categorized; in a semi-constrained task, exactly three groups had to be created; and in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks, and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
In this study, we show that humans form highly similar perceptual spaces when they explore complex objects from a parametrically defined object space in the visual and haptic domains. For this, a three-dimensional parameter space of well-defined, shell-like objects was generated. Participants either explored two-dimensional pictures or three-dimensional, interactive virtual models of these objects visually, or they explored three-dimensional plastic models haptically. In all cases, the task was to rate the similarity between two objects. Using these similarity ratings and multidimensional scaling (MDS) analyses, the perceptual spaces of the different modalities were then analyzed. Looking at planar configurations within this three-dimensional object space, we found that active visual exploration led to a perceptual space highly similar to that from passive exploration, showing that participants were able to reconstruct the complex parameter space from two-dimensional pictures alone. Furthermore, we found that the visual and haptic perceptual spaces had virtually the same topology as the physical stimulus space. Surprisingly, the haptic modality even slightly exceeded the visual modality in recovering the topology of the complex object space when the whole three-dimensional space was explored. Our findings point to a close connection between visual and haptic object representations and demonstrate the great degree of fidelity with which haptic shape processing occurs.
Primates possess the remarkable ability to differentiate faces of group members and to extract relevant information about the individual directly from the face. Recognition of conspecific faces is achieved by means of holistic processing, i.e., the processing of the face as an unparsed, perceptual whole rather than as a collection of independent features (part-based processing). The most striking example of holistic processing is the Thatcher illusion: local changes in facial features are hardly noticeable when the whole face is inverted (rotated 180 degrees), but strikingly grotesque when the face is upright. This effect can be explained by a lack of processing capabilities for locally rotated facial features when the face is turned upside down. Recently, a Thatcher illusion was described in the macaque monkey analogous to that known from human investigations. Using a habituation paradigm combined with eye tracking, we address the critical follow-up questions raised in the aforementioned study to show the Thatcher illusion as a function of the observer's species (humans and macaques), the stimulus species (humans and macaques), and the level of perceptual expertise (novice, expert).
The aim of this study was to separately analyze the role of featural and configural face representations. Stimuli containing only featural information were created by cutting the faces into their parts and scrambling them. Stimuli only containing configural information were created by blurring the faces. Employing an old-new recognition task, the aim of Experiments 1 and 2 was to investigate whether unfamiliar faces (Exp. 1) or familiar faces (Exp. 2) can be recognized if only featural or configural information is provided. Both scrambled and blurred faces could be recognized above chance level. A further aim of Experiments 1 and 2 was to investigate whether our method of creating configural and featural stimuli is valid. Pre-activation of one form of representation did not facilitate recognition of the other, neither for unfamiliar faces (Exp. 1) nor for familiar faces (Exp. 2). This indicates a high internal validity of our method for creating configural and featural face stimuli. Experiment 3 examined whether features placed in their correct categorical relational position but with distorted metrical distances facilitated recognition of unfamiliar faces. These faces were recognized no better than the scrambled faces in Experiment 1, providing further evidence that facial features are stored independently of configural information. From these results we conclude that both featural and configural information are important to recognize a face and argue for a dual-mode hypothesis of face processing. Using the psychophysical results as motivation, we propose a computational framework that implements featural and configural processing routes using an appearance-based representation based on local features and their spatial relations. In three computational experiments (Experiments 4-6) using the same sets of stimuli, we show how this framework is able to model the psychophysical data.
Communication is critical for normal, everyday life. During a conversation, information is conveyed in a number of ways, including through body, head, and facial changes. While much research has examined these latter forms of communication, the majority of it has focused on static representations of a few, supposedly universal expressions. Normal conversations, however, contain a very wide variety of expressions and are rarely, if ever, static. Here, we report several experiments that show that expressions that use head, eye, and internal facial motion are recognized more easily and accurately than static versions of those expressions. Moreover, we demonstrate conclusively that this dynamic advantage is due to information that is only available over time, and that the temporal integration window for this information is at least 100 ms long.
Primates developed the ability to recognize and individuate their conspecifics by the face. Despite numerous electrophysiological studies in monkeys, little is known about the face-processing strategies that monkeys employ. In contrast, face perception in humans has been the subject of many studies providing evidence for specific face processing that evolves with perceptual expertise. Importantly, humans process faces holistically, here defined as the processing of faces as wholes, rather than as collections of independent features (part-based processing). The question remains to what extent humans and monkeys share these face-processing mechanisms. By using the same experimental design and stimuli for both monkey and human behavioral experiments, we show that face processing is influenced by the species affiliation of the observed face stimulus (human versus macaque face). Furthermore, stimulus manipulations that selectively reduced holistic and part-based information systematically altered eye-scanning patterns for human and macaque observers similarly. These results demonstrate the similar nature of face perception in humans and monkeys and pin down effects of expert face-processing versus novice face-processing strategies. These findings therefore directly contribute to one of the central discussions in the behavioral and neurosciences about how faces are perceived in primates.
Humans are experts at shape processing. This expertise has been learned and fine-tuned by actively manipulating and perceiving thousands of objects during development; shape processing therefore has both an active and a perceptual component. Here, we investigate both components in seven experiments in which participants view and/or interact with novel, parametrically defined 3D objects using a touch-screen interface. To probe shape processing, we use a similarity rating task. In Experiments 1-3, we show that active manipulation leads to a better perceptual reconstruction of the physical parameter space than judging rotating objects or passively viewing someone else's exploration pattern. In Experiment 4, we exploit object constancy, the fact that the visual system assumes that objects do not change their identity during manipulation. We show that slowly morphing an object during active manipulation systematically biases similarity ratings, despite participants being unaware of the morphing. Experiments 5 and 6 investigate the time course of integrating shape information by restricting the morphing to the first or second half of the trial. Interestingly, the results indicate that participants do not seem to integrate shape information beyond 5 s of exploration time. Finally, Experiment 7 uses a secondary task to show that the previous results are not simply due to lack of attention during the later parts of the trial. In summary, our results demonstrate the advantage of active manipulation for shape processing and indicate a continued, perceptual integration of complex shape information within a time window of a few seconds during object interactions.
The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.
Even though we can recognize faces by touch surprisingly well, haptic face recognition performance is still worse than that of visual exploration. One possible reason for this performance difference is that the two modalities use different encoding strategies, namely, holistic encoding in vision versus serial encoding in haptics. Here, we tested this hypothesis by promoting serial encoding in vision, using a novel, gaze-restricted display that limited the effective field of view in vision to resemble that of haptic exploration. First, we compared haptic with gaze-restricted and unrestricted visual face recognition. Second, we used the face inversion paradigm to assess how encoding differences might affect processing strategies (featural vs. holistic). By promoting serial encoding in vision, we found equal face recognition performance in vision and haptics with a clear switch from holistic to featural processing, suggesting that performance differences in visual and haptic face recognition are due to modality-specific encoding strategies.
Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: the haptic system is also expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate, modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.