Differences in the recognition of sadness, anger, and fear in facial expressions: the role of the observer and model gender

Facial expressions are among the most salient indicators of another person’s emotional state and play a central role in everyday social interactions because they reflect intentions and internal states (1).

Numerous studies have examined gender differences in the recognition of various emotional facial expressions, with inconsistent results. A meta-analysis of gender differences in the recognition, discrimination, and identification of facial emotional expressions by McClure (2) suggests that there is a small but significant female advantage in facial expression processing across development, with gender differences most pronounced in infancy and early childhood. An early meta-analysis by Hall (3) also supports a female advantage in nonverbal sensitivity, including the recognition of emotional facial expressions, vocal prosody, posture, and gestures. In addition, a recent meta-analysis (4) of 551 effect sizes from 215 samples also indicated an overall female advantage in emotion recognition tasks. However, the magnitude of gender differences depended on the specific emotion, emotion type (positive or negative), the gender of the model presenting the emotion, and the age of the participants. Although the general conclusion of various meta-analyses and review studies is that females are more efficient in emotion recognition tasks, there is no consensus as to whether this is true for all or only some emotions (5).

In theory, two evolutionary hypotheses provide strong explanations of gender differences in facial expression processing. The attachment promotion hypothesis suggests that females are better at recognising emotional facial expressions because, as mothers and primary caregivers, they became more sensitive to their children’s laughter, crying, and other nonverbal signals. This evolutionary adaptation enhances the chances of secure attachment in infants, preverbal infants in particular (6). According to this hypothesis, females should outperform males in decoding both positive and negative emotions from facial expressions (6). Many studies have confirmed this assumption (4, 6, 7, 8, 9, 10, 11). The second, the fitness threat hypothesis, derived from the 1985 primary caretaker hypothesis of Babchuk et al. (12), suggests that females should outperform males in recognising negative emotions, as they have predominantly been responsible for child care throughout hominid evolution and evolved specific adaptations that increase the likelihood of survival of their offspring. Researchers who used negative facial expressions such as sadness, fear, and disgust (13, 14, 15, 16) confirmed a female advantage in the recognition of negative emotions from facial expressions. On the other hand, some studies have found that males have an advantage in recognising anger (13, 14, 17, 18).

From an evolutionary perspective, recognising anger in males (especially in the faces of other males) was very important for survival. Kirouac and Dore (19) confirmed the male advantage in recognising anger even at short stimulus exposures, while Hager and Ekman (20) showed that males recognise this emotional expression from a greater distance than any other expression. It has also been confirmed that males express anger more frequently than females (21) and show a greater response in encoding aggression (5), which is consistent with Buck’s interpretation (22) that females tend to suppress anger, whereas males tend to externalise aggression and anger. In addition, anger in a male face has been reported to be perceived more accurately (23) and more quickly (21, 23) than anger in a female face. According to Becker (24), facial discrimination of anger cannot be perceptually separated from automatic discrimination of gender, suggesting that gender processing is an evolutionarily older perceptual system that automatically interferes with expression processing.

Although the gender of the model expressing the emotion might affect recognition test results (4, 21), reports are contradictory in this respect. Some researchers (25) found that males were better at recognising emotions from the facial expressions of male models and females from those of female models. Collignon et al. (16) confirmed that females were better at recognising facial expressions in females, but Rotter and Rotter (13) reported that males were better at recognising disgust, fear, and sadness in female than in male models. This finding can partly be explained by Buck’s hypothesis (22) that, in general, female models express emotions more readily, but that in aggressive situations females tend to internalise emotional reactions and males tend to externalise them, which is why anger is easier to recognise when the model is male (21, 25). These contradictions can partly be attributed to methodological differences. One lies in the gender of the model presenting the emotion. Another is the measure used, most often either accuracy or speed (reaction time) of recognition. For example, some studies report a female advantage in the speed but not accuracy of emotion recognition (6, 26), some report the opposite, i.e., a female advantage in accuracy but not speed of recognition (19), and some report no gender differences in either measure (27). Moreover, some research designs rely on very briefly presented facial expression stimuli (9), which apparently gives an advantage to women, whose perceptual speed is better, whereas those that use longer stimulus durations report no gender differences due to a ceiling effect (6).

In this study, we attempted to overcome these methodological issues by testing gender differences in the recognition of negative facial expressions, as the results of previous studies with positive emotions are fairly consistent: positive emotions are easier to detect and more likely to produce a ceiling effect (4). We also excluded neutral facial expressions, as no gender differences have been reported in this respect (6, 28). To further address these methodological issues, we included both measures of efficiency, i.e., accuracy and speed (reaction time) of emotion recognition, as suggested by Wingenbach et al. (28). We also wanted to see how the gender of the model showing negative emotions affects recognition and gender differences among observers. To this end, we tested the individual and interactive effects of model and observer gender.

Consistent with the evolutionary hypotheses and previous reports, we tested the following three hypotheses: 1) Male and female observers will differ in the speed and accuracy of recognising negative emotions from facial expressions; specifically, females will recognise fear and sadness more accurately and quickly than males, whereas males will recognise anger more accurately and quickly than females (in line with the fitness threat and attachment promotion hypotheses).

2) Because females are more emotionally expressive, recognition of negative emotions will be more accurate and faster when the model showing the emotion is female.

3) Gender of the model and of the observer will interact; that is, male observers will recognise emotions shown by male models more accurately and quickly and, vice versa, female observers will recognise emotions shown by female models more accurately and quickly (in line with the evolutionary hypothesis about recognising an opponent’s emotions).

Participants and methods
Participants

The study included 58 undergraduate students (29 male and 29 female) from the University of Zadar. Participation was voluntary, and all participants gave informed consent. The two gender groups did not differ significantly in age (M male = 22.5 years, M female = 21.2 years; p > 0.05).

The study was approved by the ethics committee of the University of Zadar Department of Psychology.

Task design

The participants were shown colour images of different facial expressions, taken with permission from the Karolinska Directed Emotional Faces (KDEF) database (29). KDEF is a database of 4900 images of faces of amateur male and female actors aged 20–30 years showing various human emotions. The photos show faces at different angles (facing forward at 0° or facing left and right at 45° and 90°). The size of the photos was 15 × 20 cm.

We used 210 KDEF images of male and female models showing facial expressions of anger, sadness, and fear at five angles (-90°, -45°, 0°, 45°, 90°), with 70 stimuli per emotion. Model gender was balanced, so there were 35 male and 35 female images per emotion. The 210 images were divided into 3 (emotions) × 2 (model genders) × 5 (angles) = 30 cells with 7 repetitions of each cell. We chose 7 repetitions per cell to ensure reliable responses while keeping the task reasonably short.
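For illustration, the balanced trial list implied by this design can be sketched as follows (a minimal Python sketch; the dictionary labels are hypothetical, not the actual KDEF file names, and the original experiment was implemented in E-Prime):

```python
import itertools
import random

EMOTIONS = ["anger", "fear", "sadness"]
MODEL_GENDERS = ["female", "male"]
ANGLES = [-90, -45, 0, 45, 90]   # degrees from the frontal view
REPETITIONS = 7                  # 3 x 2 x 5 x 7 = 210 trials

# One entry per design cell, repeated 7 times, then shuffled so that
# the 210 images appear in random order.
trials = [
    {"emotion": emotion, "model_gender": gender, "angle": angle, "repetition": rep}
    for emotion, gender, angle, rep in itertools.product(
        EMOTIONS, MODEL_GENDERS, ANGLES, range(REPETITIONS)
    )
]
random.shuffle(trials)
assert len(trials) == 210
```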

The 210 images were shown in random order on a 17-inch monitor with a 1280 × 1024 resolution and were viewed at a distance of about 70 cm from the monitor.

The task, designed in E-Prime 2.0 (Psychology Software Tools Inc., Pittsburgh, PA, USA), was for participants to press one of three buttons on the Serial Response Box (Psychology Software Tools Inc.) corresponding to the identified emotion as quickly as possible: button 1 for sadness, button 2 for anger, and button 3 for fear. The remaining two buttons were not used. Below these three buttons we marked the position on which participants were asked to place the forefinger of their dominant hand after each response; this position was equidistant from all three response buttons. Before the test began, each participant was trained to use the three response buttons. Each image was displayed until the participant pressed a button, followed by a 1.5 s pause before the next image.
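The timing logic of a single trial can be illustrated with the following sketch (a console stand-in, not the E-Prime implementation: `present_stimulus` is a hypothetical placeholder for image display, and `input()` substitutes for the Serial Response Box):

```python
import time

BUTTONS = {"1": "sadness", "2": "anger", "3": "fear"}  # response mapping

def run_trial(present_stimulus):
    """Show one image until a button press; return (response, RT in ms)."""
    present_stimulus()              # the image stays on until a response
    t0 = time.monotonic()
    key = input("1 = sadness, 2 = anger, 3 = fear > ").strip()
    rt_ms = (time.monotonic() - t0) * 1000.0
    time.sleep(1.5)                 # 1.5 s pause before the next image
    return BUTTONS.get(key), rt_ms
```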

After the test, we collected output files for each participant, with information on response accuracy and recognition speed (reaction time in milliseconds) for each image. Accuracy was assessed as the frequency (and percentage) of correctly identified images.

Statistical analysis

For statistical analysis we used Statistica, version 12 (StatSoft, Tulsa, OK, USA). The analysis included two dependent variables: response accuracy (frequency of correct responses) and speed of correct recognition measured by reaction time (ms). Both variables were calculated over all face angles.

Before analysing the data, we standardised reaction times for each participant and excluded responses with a z value greater than 2.5. These overly long reaction times were treated as incorrect responses, and all further analyses of reaction times included correct responses only, as suggested by Wingenbach et al. (26).
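This trimming step can be expressed as a minimal pandas sketch, assuming long-format data with one row per trial and hypothetical column names ('participant', 'rt_ms', 'correct'):

```python
import pandas as pd

def trim_reaction_times(trials: pd.DataFrame) -> pd.DataFrame:
    """Keep correct responses whose RT lies within 2.5 SD of the
    participant's own mean RT (per-participant standardisation)."""
    z = trials.groupby("participant")["rt_ms"].transform(
        lambda rt: (rt - rt.mean()) / rt.std(ddof=1)
    )
    # Responses with z > 2.5 are treated as incorrect; all further RT
    # analyses use correct responses only.
    return trials[(z <= 2.5) & trials["correct"]]

# correct_trials = trim_reaction_times(raw_trials)
```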

For each of the two dependent variables (accuracy and speed of recognition) we ran a three-way ANOVA with 2 observer genders × 2 model genders × 3 emotions, followed by Fisher’s least significant difference (LSD) post hoc tests.
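As an aside, the within-subject part of this analysis can be reproduced in Python with statsmodels (a sketch under the same hypothetical column names as above; note that AnovaRM handles only the two repeated factors, so the full mixed design with observer gender as a between-subjects factor, as run in Statistica, is not captured here):

```python
from statsmodels.stats.anova import AnovaRM

# correct_trials: long-format data with one row per correct trial and
# columns 'participant', 'model_gender', 'emotion', 'rt_ms'
# (e.g., the output of the trimming step sketched above).
result = AnovaRM(
    data=correct_trials,
    depvar="rt_ms",
    subject="participant",
    within=["model_gender", "emotion"],
    aggregate_func="mean",   # one mean RT per participant and cell
).fit()
print(result)
```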

Results

Table 1 shows the mean recognition accuracy of sadness, fear, and anger by model and observer gender. Anger and sadness were more accurately recognised by both genders (range 82–91 %) than fear (66–84 %). Reaction times ranged from 1186 to 1649 ms. The fastest was the recognition of anger in a male model by female participants, and the slowest was the recognition of fear in a female model by male participants. Overall, female participants showed faster reaction times than males regardless of model gender.

Table 1 Accuracy (frequency of correct answers out of a maximum of 35 per cell, with percentage) and speed of recognition (reaction time) of facial expressions of anger, fear, and sadness by observer and model gender

                              Accuracy                                 Reaction time (ms)
                  Male observers        Female observers       Male observers        Female observers
Model    Emotion  Mean         SD       Mean         SD        Mean       SD         Mean       SD
Female   Anger    31.07 (89 %) 3.24     30.45 (87 %) 3.03      1326.75    367.15     1189.39    236.15
Female   Fear     26.72 (76 %) 5.38     29.48 (84 %) 2.61      1649.73    443.01     1360.08    288.70
Female   Sadness  30.90 (88 %) 4.11     31.86 (91 %) 2.35      1453.43    428.83     1281.46    313.74
Male     Anger    28.69 (82 %) 2.80     29.24 (84 %) 2.48      1323.23    387.08     1186.95    241.71
Male     Fear     23.00 (66 %) 5.19     25.07 (72 %) 3.89      1634.51    444.58     1423.63    315.27
Male     Sadness  29.21 (83 %) 4.21     30.03 (86 %) 2.41      1463.81    448.93     1259.24    302.91

SD – standard deviation

The three-way ANOVA revealed significant main effects of observer gender and emotion on reaction time, whereas no other main or interaction effects on speed were significant (Table 2). With respect to emotion, Fisher’s LSD test showed significant differences (p<0.01) in reaction time between all three facial expressions. Observers of both genders reacted fastest to anger and slowest to fear (Table 1).

Table 2 Influence of single variables and their interactions on accuracy and speed of recognising facial expressions of anger, fear, and sadness (three-way ANOVA)

                                        Accuracy                   Speed
Effect                                  F        df      η²        F        df      η²
Gender of observer (A)                  3.68     1/56    0.06      4.72*    1/56    0.08
Gender of model (B)                     81.97**  1/56    0.59      0.25     1/56    0.01
Emotion (C: anger, fear, sadness)       37.67**  2/112   0.40      61.00**  2/112   0.52
Interaction A × B                       0.10     1/56    0.00      0.59     1/56    0.01
Interaction B × C                       6.86**   2/112   0.11      0.67     2/112   0.01
Interaction A × C                       3.13*    2/112   0.05      2.88     2/112   0.05
Interaction A × B × C                   0.47     2/112   0.01      1.99     2/112   0.03

*p<0.05; **p<0.01
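The reported η² values are consistent with partial eta squared computed directly from each F ratio and its degrees of freedom; for example, for the effect of model gender on accuracy:

$$\eta_p^2 = \frac{F \cdot df_1}{F \cdot df_1 + df_2} = \frac{81.97 \times 1}{81.97 \times 1 + 56} \approx 0.59$$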

We found no significant effect of observer gender on accuracy, but emotion type and the interaction between observer gender and emotion type had significant effects. Fisher’s LSD test showed no significant difference in accuracy between male and female participants in recognising anger (p=0.705) and sadness (p=0.274), but female participants were more accurate in recognising fear (p=0.004). This is also evident from Table 1.

Model gender also had a significant effect on accuracy (Table 2), as recognition was more accurate with female than male models (Table 1). The significant interaction between model gender and type of emotion, however, suggests that although all three emotions were more accurately recognised in female models, observer efficiency differed across emotions. Fisher’s LSD test confirmed that accuracy in recognising fear was significantly lower than accuracy in recognising anger (p<0.001 for both male and female models) and sadness (p<0.001 for male models; p=0.002 for female models) regardless of model gender, whereas accuracy in recognising anger and sadness did not differ significantly (p=0.223 for male models; p=0.198 for female models). Accuracy in recognising fear was higher with female models (Table 1). No other interaction effects were significant (Table 2).

In summary, female participants were faster in recognising all three facial expressions and more accurate in recognising fear, but not anger or sadness. All three facial expressions were recognised more accurately in female than in male models. These results suggest that females are not only faster at recognising emotional expressions and more accurate at recognising fear, but also express these emotions more recognisably.

Discussion

Even though we hypothesised that males would be faster and more accurate in recognising anger, our results show no male advantage in recognising this emotion. Instead, female participants recognised all three emotions faster, and fear more accurately, than male participants. There were no gender differences in the recognition accuracy of anger and sadness. However, as far as recognition speed is concerned, our results support the fitness threat hypothesis that females have an advantage in recognising the negative emotional expressions of fear and sadness as an evolutionary adaptation related to the greater investment of female ancestors in parental care and the nurturing of offspring (6, 11). This female advantage was also found in the speed of recognising anger, which contradicts the fitness threat hypothesis but is consistent with the attachment promotion hypothesis, which postulates that females have an advantage in recognising all emotions from facial expressions. The facial expression of anger signals another person’s negative intentions (17) and a threat, to which females are more sensitive than males. Using an affective priming paradigm, Donges et al. (30) confirmed that women are more sensitive to negative emotional facial expressions when primes are clearly visible. This result also suggests that women allocate more attention to threat-related stimuli than men when these stimuli are clearly visible.

Besides the evolutionary explanation, the faster recognition of all three negative emotional expressions by our female participants may also be related to their better interhemispheric communication (4). Neuroimaging studies likewise show gender differences in the recruitment of cerebral networks in response to emotional facial expressions (5). The right inferior frontal cortex is more active in females during recognition, whereas the left temporoparietal area is more active in males, suggesting that males and females use different strategies to process emotions (28).

Given the stimulus material used in our study, it is possible that females are faster in recognising facial expressions only when these are intense; the images we used are exactly that, highly intense emotional facial expressions. Wingenbach et al. (28), however, dispute such a dependence of gender differences in reaction times on expression intensity. Using short video recordings of ten different facial expressions (including anger, fear, and sadness) in everyday situations at three intensity levels, they showed that females were faster in correctly recognising all expressions regardless of intensity. This finding is consistent with our results and those of other studies (6, 11, 14, 15) showing a female advantage in recognition speed but not accuracy.

Furthermore, considering the experimental design of our study, a gender difference in recognition speed but not accuracy was to be expected. Our design allowed participants to perform the task at their own pace, i.e., the next stimulus appeared only after the participant had responded to the previous one, followed by a 1.5 s pause. Participants were also instructed to respond as quickly and accurately as possible while the experimenter observed them. Considering that this design favours accuracy, the relatively high accuracy of both genders in recognising anger and sadness most likely reflects a ceiling effect. However, accuracy in recognising fear was significantly lower and, as in other studies (18), female participants were more accurate. Given this methodological limitation of the current design, recognition speed (reaction time) is the more reliable and sensitive dependent variable. That reaction time is a more sensitive variable when ceiling effects are present in accuracy measurements was also confirmed by Coren et al. (31). Because gender differences in our study were greater for recognition speed than for accuracy, we believe that measuring both speed and accuracy is justified, as each reflects a different aspect of facial cognition (32, 33).

Our other hypotheses related to the gender of the model presenting the emotion. We assumed that observers, regardless of gender, would recognise emotions more accurately and quickly when shown by a female model, because female faces are more expressive (5). Based on the evolutionary hypothesis about recognising an opponent’s emotions, we also assumed that emotion recognition might depend on the interaction between model and observer gender. Our results are partly consistent with the first assumption and refute the second: facial expressions presented by female models were recognised more accurately, though not faster, presumably owing to the greater expressiveness of female faces (3, 5, 32). However, Huang (33) reported stronger electromyographic facial activity (i.e. facial expression) in female participants only when they watched films conveying happiness and fear, but not anger. These participants also subjectively experienced more intense feelings of fear but not of anger or happiness.

Although all three emotional facial expressions were more accurately recognised in female models in our study, there was also a significant interaction between model gender and type of emotion. As noted above, accuracy in recognising fear presented by male and female models was significantly lower than accuracy in recognising anger and sadness, and the effect size was quite robust. The lower accuracy in recognising fear was more pronounced when the model was male. This is consistent with earlier reports that female facial expressions are more pronounced (33), that fear is more likely to be identified in a female face (25), that the response to fear is faster in women (34), and that women are better at remembering fearful (but not neutral or happy) female than male faces (35). It is also possible that fear was the least accurately recognised emotion in our study because of the similarity between the facial expressions of fear and surprise; of the six basic facial expressions, fear and surprise are the pair most easily confused (36, 37, 38).

Conclusion

Our finding that women recognise facial expressions of fear faster and more accurately than men supports the evolutionary fitness threat hypothesis, but this was not confirmed for anger and sadness, as there were no gender differences in recognition accuracy for these two emotions. We also found that facial expressions were recognised more accurately (but not faster) in female than in male models. In other words, not only are females faster at recognising fear, sadness, and anger and more accurate at recognising fear, but they also express these emotions in a way that is easier for observers of both genders to recognise.

We believe that future studies should build on the strengths of our design, most notably the combination of speed and accuracy measures, and expand from the three primary, exclusively negative emotions to more complex ones using ecologically valid stimuli from real faces. Future studies should also try to avoid the ceiling effect by employing more difficult tasks.
