
Predicting HR Management Strategies and Employee Satisfaction in Enterprises Based on Deep Generative Models

  

Introduction

With the rapid development of economic globalization and information technology, enterprises face a growing number of challenges and opportunities. As an important part of enterprise management, human resource management plays a crucial role in the stability and development of enterprises [1-2]. At the same time, under the new situation, talent mobility has increased, diversity and inclusiveness have become the new normal of corporate culture, telework and flexible employment modes are becoming more popular, and employees' pursuit of work-life balance has grown stronger. All of these changes pose new challenges to traditional human resource management. It is therefore of great significance to examine in depth how enterprises can adjust and innovate their human resource management strategies under the current situation [3-5].

Generative models, which randomly generate samples by learning the probability density of observable data, have received much attention in recent years, and deep generative models, which contain multiple hidden layers in the network structure, have become a research hotspot owing to their stronger generative ability [6-7]. Deep generative models have been successfully applied in computer vision, density estimation, natural language and speech recognition, semi-supervised learning, and other areas, and they provide a good paradigm for unsupervised learning. Enterprise human resource management strategies based on deep generative models can predict employee satisfaction well [8-9]. The relevant government report calls for prioritizing the development of education and building a country strong in human resources, which underscores the importance of human resources to organizations. In today's society, high-quality talent is often the core competitiveness of an organization or even an entire industry, especially for enterprises and service agencies in service industries, so human resource (HR) management should be highly valued [10-12]. Employee satisfaction, an important indicator in HR management assessment, is also one of the main elements of modern management. HR management and employee satisfaction are closely related: assessing employee satisfaction can indirectly reveal problems in HR management and suggest targeted measures to solve them, and strengthening an organization's internal HR management and service capabilities can effectively enhance employee satisfaction [13-15].

As an important branch of unsupervised learning, deep generative models provide an efficient solution for analysing and learning the structural distribution of unlabelled data. Literature [16] attempts to introduce generative artificial intelligence (AI) into human resource management (HRM), aiming to provide an innovative solution for the efficiency and effectiveness of human capital management, and verifies the effectiveness of the method through empirical analysis; the approach not only improves the efficiency of HRM but also accommodates employees' personal needs, learning styles, and career aspirations, and predicts employees' future performance trends. Literature [17] reviewed the literature on AI and generative AI to provide a theoretical basis for shaping the future of HRM research, linking generative AI to various aspects of HRM processes, practices, relationships, and outcomes from multifaceted perspectives and research avenues. Literature [18] provides an overview of current developments, major challenges, and future research directions in generative AI and HRM, and also examines future research directions focusing on the application of generative AI to HRM, with the aim of improving human resource processes and ensuring the implementation of ethics and fairness.

Employee satisfaction surveys are a common HRM tool in organisations. Literature [19] emphasised the importance of employee performance and examined the relationship between human resource management practices, job satisfaction, and employee performance using SmartPLS; the results showed that the impact of human resource management practices on employee performance was significant and positive, while job satisfaction mediated the relationship between the two. Literature [20] proposed a new perspective on employee satisfaction assessment using Slovak postal enterprises as an example and examined, through regression and correlation analysis, how job attributes and socio-demographic characteristics influence employee satisfaction and employee loyalty in sustainable human resource management. Literature [21] designed a questionnaire experiment with employees in the Jordanian banking sector, mainly to investigate the effectiveness of HRM practices and its impact on employee satisfaction; the results showed a significant positive correlation between the effectiveness of HRM and employee satisfaction. Literature [22] proposed an innovative method for predicting employee departures based on artificial neural networks (ANN) and clustering techniques. The effectiveness of the method was verified experimentally: it can identify key turnover predictors so that targeted interventions can be implemented, improving the efficiency and effectiveness of retention policies and providing technical support for human resources (HR) analytics. Literature [23] proposes a purely supervised machine learning approach to classify employee evaluations and assess employee satisfaction, in order to assist the implementation of corporate HRM strategies and improve corporate profitability.

In this paper, after first optimising the enterprise's HR management strategy, a facial expression recognition technique is built on top of a deep generative model and used to detect employees' facial expressions and personal emotions, reflecting employee satisfaction through those emotions and expressions. To verify the accuracy of the recognition method, recognition experiments were carried out on the Cohn-Kanade expression library. The paper also analyses the predicted changes in employees' emotions in different simulated situations under the optimised HR management strategy and judges whether employees are satisfied with their jobs on the basis of their positive emotions.

Intelligent Enterprise Human Resource Management Strategy
Data-driven recruitment and selection strategies

Recruitment refers to an enterprise's initiative to find, attract, and absorb suitable talent through various channels in order to meet its development and operational needs. Selection refers to assessing and screening candidates' abilities, skills, experience, knowledge, and adaptability during the recruitment process in order to choose the talent best suited to the enterprise's needs. Through effective recruitment and selection, companies can improve the quality of talent, enhance competitiveness, and ensure that human resources align with corporate goals and strategies. Recruitment and selection is thus a key step for enterprises to acquire and select human resources and affects their level of development, so enterprises should carry out recruitment and selection activities actively. In the era of big data, enterprises can integrate big data technology into recruitment and selection and use data-driven methods to achieve better recruitment and selection results and inject fresh talent into their development.

Data-driven employee performance management strategies

Strengthening employee performance management helps to improve employee productivity and performance, cultivate a high-performance culture, identify and reward excellent performance, identify and solve problems in a timely manner, promote employee development and growth, and sustain the progress and competitiveness of the enterprise. In other words, strengthening employee performance management is crucial to enterprise development. In an increasingly competitive social and economic context, enterprises must carry out employee performance management scientifically [24]. Since big data technology provides more possibilities and convenience for performance management, performance management can be further optimized with its help: enterprises can increase the application of big data technology in employee performance management and give full play to its value, as shown in Figure 1.

Figure 1. Data-driven employee performance management ideas

Data-driven employee training and development strategies

Employee training and development is an important part of human resource management. In the course of development, enterprises need to actively carry out employee training activities to help employees improve their working abilities, support their personal career growth, and promote their development. In the era of big data, using data-driven approaches to strengthen employee training and development helps enterprises make better use of data to improve employees' abilities and skills, so that employees can develop fully and adapt to ever-changing market demands and technological developments. For this reason, enterprises must leverage big data technology to optimize employee training and enhance employee development.

Employee emotion recognition based on deep generative models
Deep Generative Models

Most reconstruction-based time series anomaly detection models are based on an autoencoder model, denoted $AE(\cdot)$, consisting of an encoder $E(\cdot)$ and a decoder $D(\cdot)$. For each sample $x$ in the training set $X$, the autoencoder $AE(\cdot)$ is trained by minimising a reconstruction error, such as the mean square error loss:
$$L_{AE} = \| x - AE(x) \|^2$$

Anomalies can then be detected at the detection stage through the reconstruction error, which tends to be higher for anomalous samples and lower for normal samples. In order to model the input data better, many variants of the autoencoder have emerged; the autoencoder and its common variants are described below.

Autoencoders

An autoencoder is an unsupervised neural network model whose main purpose is to learn the implicit features of the input data and use these learned features to reconstruct the original input. These two steps are called encoding and decoding, respectively, and the implicit features are called hidden vectors. Intuitively, the autoencoder resembles feature dimensionality reduction, but it performs better than traditional dimensionality reduction methods such as principal component analysis because it can automatically extract useful features [25]. In addition, the new features learned by the autoencoder can be fed into downstream supervised models to perform subsequent tasks.

For an input sample $x$, the hidden vector $z = E(x)$ is first obtained by the encoder, and in general the dimension of $z$ is smaller than the dimension of $x$. The reconstruction $\hat{x} = D(z)$ is then obtained from $z$ by the decoder. In order to make the reconstruction as faithful as possible, the mean square error between the two is generally minimised:
$$L_{AE} = \| x - D(E(x)) \|^2$$
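As an illustration, a minimal PyTorch sketch of this encode, decode, and reconstruct loop is given below; the layer widths, the batch, and the optimiser settings are illustrative assumptions and do not come from the paper.

```python
import torch
import torch.nn as nn

# Minimal autoencoder sketch: the encoder E(.) maps x to a lower-dimensional z,
# the decoder D(.) reconstructs x_hat from z, and training minimises ||x - D(E(x))||^2.
# The layer sizes (64 -> 16) are illustrative assumptions, not values from the paper.
class AutoEncoder(nn.Module):
    def __init__(self, in_dim=64, hidden_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, hidden_dim))
        self.decoder = nn.Sequential(nn.Linear(hidden_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        z = self.encoder(x)          # hidden vector z = E(x)
        return self.decoder(z), z    # reconstruction x_hat = D(z)

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 64)             # placeholder batch standing in for training samples
x_hat, _ = model(x)
loss = nn.functional.mse_loss(x_hat, x)   # L_AE = ||x - D(E(x))||^2
opt.zero_grad()
loss.backward()
opt.step()
```

At detection time, the per-sample reconstruction error of this model would serve as the anomaly score described above.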

Variational autoencoders

The variational autoencoder (VAE) is one of the most important generative models. Intuitively, a VAE uses an encoder to map real samples to an idealized data distribution, such as the standard normal distribution, then samples from this distribution and passes the samples to a decoder to obtain generated samples. To keep the encoder's output distribution close to this ideal distribution, the VAE uses the KL divergence, as shown in Figure 2. Theoretically, the VAE was originally formulated as a neural-network implementation of variational inference, which aims to model a complex unknown distribution $p(x)$ from samples.

Figure 2. Variational Autoencoders

The VAE does this by optimising a lower bound on $p(x)$:
$$\log p(x) \ge \log p(x) - KL\left\{ q(z|x) \,\|\, p(z|x) \right\} = E_{q(z|x)}\left\{ \log p(x) + \log p(z|x) - \log q(z|x) \right\} = E_{q(z|x)}\left\{ \log p(x,z) - \log q(z|x) \right\} = E_{q(z|x)}\left\{ \log p(x|z) + \log p(z) - \log q(z|x) \right\}$$

The last line, $\log p(x) \ge E_{q(z|x)}\{\log p(x|z) + \log p(z) - \log q(z|x)\}$, is also known as the Evidence Lower Bound (ELBO).

In the variational autoencoder, the distribution $q(z|x)$ is modelled by the encoder $E(\cdot)$, the distribution $p(x|z)$ is modelled by the decoder $D(\cdot)$, and the expectation $E$ is replaced by Monte Carlo sampling, giving the loss function of the variational autoencoder:
$$L_{VAE} = \sum_{x \in X}\left\{ \log p(x|z) + \log p(z) - \log q(z|x) \right\}$$

Here the term $\log p(x|z)$ corresponds to maximising the Gaussian log-likelihood of the input samples at the decoder output, and the term $\log p(z) - \log q(z|x)$ corresponds to minimising the KL divergence between the encoder's output distribution $q(z|x)$ and the standard normal distribution. Compared with the ordinary autoencoder, the variational autoencoder therefore differs in three main ways: (1) the outputs of the encoder and decoder are Gaussian distributions rather than single vectors; (2) a constraint is added on the hidden variables; and (3) the reconstruction error is replaced by the log-likelihood.
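As a sketch of how these two terms are typically implemented, the following PyTorch code builds a small VAE with the reparameterisation trick and combines a Gaussian reconstruction term with the closed-form KL divergence to the standard normal prior; the network shapes and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the VAE objective: the encoder outputs a Gaussian q(z|x) = N(mu, sigma^2),
# a sample z is drawn via the reparameterisation trick, and the loss combines the
# reconstruction log-likelihood term with KL( q(z|x) || N(0, I) ).
class VAE(nn.Module):
    def __init__(self, in_dim=64, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(in_dim, 32)
        self.mu = nn.Linear(32, z_dim)
        self.logvar = nn.Linear(32, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # -log p(x|z) approximated by the squared reconstruction error (Gaussian likelihood)
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    # KL( q(z|x) || N(0, I) ) in closed form
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

x = torch.randn(16, 64)                 # placeholder batch
model = VAE()
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
```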

Generative Adversarial Networks

Generative adversarial networks, like variational inference, aim to estimate an unknown distribution $p(x)$ from samples. They do this by starting from a simple random distribution $p(z)$ (typically a standard normal distribution) and training a mapping from $p(z)$ to $p(x)$, i.e., a decoder $D(\cdot)$. In order to make the decoder's outputs as close as possible to the true distribution, generative adversarial networks also include a discriminator $\phi(\cdot)$ [26]. The discriminator takes a sample as input and outputs the probability that it is a true sample rather than one generated by the decoder. The core idea is to let the decoder and the discriminator compete: the decoder tries to produce samples that the discriminator scores as high as possible, while the discriminator tries to score the decoder's outputs as low as possible and the real samples as high as possible. The loss function is:
$$\min_D \max_\phi V(D, \phi) = E_{x \sim p(x)}\left\{ \log \phi(x) \right\} + E_{z \sim p(z)}\left\{ \log\left(1 - \phi(D(z))\right) \right\}$$

When sufficiently trained, the decoder's outputs become indistinguishable to the discriminator, i.e., the goal of estimating the true distribution $p(x)$ is achieved. However, in anomaly detection it is necessary to obtain the hidden vector $z$ corresponding to a given input sample $x$; if only a GAN is used, a separate search procedure is needed to find $z$, which is very inefficient.
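A minimal PyTorch sketch of one adversarial training step under this objective follows, with the decoder playing the role of the generator; the shapes, learning rates, and the non-saturating form of the generator loss are illustrative choices rather than the paper's configuration.

```python
import torch
import torch.nn as nn

# Sketch of one adversarial training step: the discriminator phi(.) is pushed to score
# real samples high and generated samples low, while the decoder D(.) is pushed to make
# phi(D(z)) score high. All sizes are illustrative.
z_dim, x_dim = 8, 64
decoder = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))
discriminator = nn.Sequential(nn.Linear(x_dim, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(decoder.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

x_real = torch.randn(32, x_dim)     # placeholder for real samples x ~ p(x)
z = torch.randn(32, z_dim)          # z ~ p(z), a standard normal

# Discriminator step: maximise log phi(x) + log(1 - phi(D(z)))
d_loss = bce(discriminator(x_real), torch.ones(32, 1)) + \
         bce(discriminator(decoder(z).detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Decoder (generator) step: push phi(D(z)) towards 1 (non-saturating form)
g_loss = bce(discriminator(decoder(z)), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```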

Adversarial Autoencoders

The Adversarial Auto-Encoder (AAE) is a variant of the autoencoder that uses adversarial training. For anomaly detection, the biggest benefit of the AAE over a GAN is that the hidden vector corresponding to an input sample $x$ can be obtained directly from the encoder, while still taking advantage of the powerful generative capabilities of adversarial training. The difference from a regular autoencoder is that a discriminator on the hidden vectors imposes an adversarial constraint on the latent distribution, while the reconstruction error is retained.

Face Image Generation Based on Deep Generative Models
Salient Object Detection

The original image $I$ can be represented as:
$$I = \lambda C + (1 - \lambda) B$$

where $C$ represents the person subject in the image, $B$ represents the background, and $\lambda$ controls the transparency of the pixels. The embedding target of the image can be represented as:
$$I = \lambda C + (1 - \lambda)\,\mathcal{B}(B)$$

Here $\mathcal{B}(\cdot)$ denotes the blurring operator; in this paper a Gaussian blur kernel of size 99×99 is chosen. Salient object detection (SOD) extraction combined with Gaussian blurring reduces the average image information entropy of the test samples from 9.27 to 7.03, which significantly reduces the difficulty of the embedding process.

The method proposed in this paper is applicable to any image embedding method. Here, a scheme that optimises the latent code is used as an example. Starting from the average latent code $\bar{\omega}$, the iteration of $\omega$ and the embedding loss $L$ can be expressed as:
$$L \leftarrow \mathcal{L}(G(\omega), I), \qquad \omega \leftarrow \omega - \eta\, F(\nabla_\omega L)$$

$G(\cdot)$ represents the generator network. The embedded latent code $\omega$ and the embedded image $G(\omega)$ are optimised by the optimiser $F$.
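The iteration above can be sketched as a simple optimisation loop over the latent code; in the following, `generator` stands in for the pretrained generator $G(\cdot)$, a pixel-wise MSE stands in for the embedding loss $\mathcal{L}$, and Adam plays the role of the optimiser $F$, all of which are assumptions made for illustration.

```python
import torch

# Sketch of the latent-code optimisation loop for image embedding: starting from the
# average latent code w_bar, the code w is updated by gradient steps on the embedding
# loss L(G(w), I). The generator and the pixel-wise loss are placeholders.
def embed_image(generator, target_image, w_bar, steps=200, lr=0.01):
    w = w_bar.clone().requires_grad_(True)
    optimiser = torch.optim.Adam([w], lr=lr)   # plays the role of the optimiser F(.)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(generator(w), target_image)  # L(G(w), I)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()                       # w <- w - eta * F(grad_w L)
    return w.detach()
```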

Residual Attention Multilayer Perceptrons

Editing of face attributes can be achieved by moving the latent code $\omega$ in the latent space in an appropriate direction. The local scoring mechanism is an efficient way to adjust an existing direction. To further optimise this method, this paper combines it with a residual attention MLP (RA-MLP) module. Whereas the original local scoring mechanism has only been used to optimise linear directions, the non-linear optimisation implemented in this paper further reduces the distortions produced during editing.

Given a pre-trained RA-MLP block, the target latent code can be represented as:
$$\omega^{+} = H(\omega) = \alpha F(\omega) + (1 - \alpha)\,\omega$$

where $F(\cdot)$ represents the MLP module to be fine-tuned and $\alpha$ is used to control the magnitude of the attribute change.
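A minimal sketch of such a residual blend is given below; the latent dimension, the MLP width, and the default value of $\alpha$ are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the residual attention MLP edit: the target latent code is the residual
# blend w_plus = alpha * F(w) + (1 - alpha) * w, where F(.) is a small MLP to be
# fine-tuned and alpha controls the strength of the attribute edit.
class ResidualAttentionMLP(nn.Module):
    def __init__(self, w_dim=512, alpha=0.5):
        super().__init__()
        self.alpha = alpha
        self.mlp = nn.Sequential(nn.Linear(w_dim, w_dim), nn.ReLU(), nn.Linear(w_dim, w_dim))

    def forward(self, w):
        return self.alpha * self.mlp(w) + (1 - self.alpha) * w   # w_plus = H(w)

w = torch.randn(4, 512)                  # a batch of latent codes
edited = ResidualAttentionMLP()(w)       # edited latent codes w_plus
```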

Local scoring mechanisms

Let $r_i(\omega)$ denote the layer-$i$ activations of the generator $G(\omega)$, and let $s_i(x, x^{+})$ denote the segmentation mask of the face-attribute semantic segmentation model, downsampled (by averaging) to the layer-$i$ resolution. The objective function of this paper can then be written as:
$$L_S(H(\omega)) = \frac{\sum_i s_i(x, x^{+})\,\left| r_i(\omega) - r_i(H(\omega)) \right|^2}{\sum_i \left| r_i(\omega) - r_i(H(\omega)) \right|^2}$$

The optimisation goal of this paper is not to obtain a single edit direction for a latent code, but rather a non-linear transformation network that can move the latent code more accurately.
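The score can be sketched as a ratio of mask-weighted to total activation change across generator layers, following the formula above; the activation and mask tensors below are placeholders, and the exact weighting is an assumption based on that formula.

```python
import torch

# Sketch of the local scoring objective: for each generator layer i, r_i are the
# activations and s_i is the attribute segmentation mask downsampled to that layer's
# resolution. The score is the mask-weighted activation change divided by the total
# activation change, so edits concentrated inside the target region score higher.
def local_score(acts_before, acts_after, masks, eps=1e-8):
    weighted, total = 0.0, 0.0
    for r_i, r_i_edit, s_i in zip(acts_before, acts_after, masks):
        diff = (r_i - r_i_edit).pow(2)
        weighted = weighted + (s_i * diff).sum()
        total = total + diff.sum()
    return weighted / (total + eps)

# Example with two placeholder "layers" of activations and masks
acts_before = [torch.randn(1, 8, 16, 16), torch.randn(1, 16, 8, 8)]
acts_after = [a + 0.1 * torch.randn_like(a) for a in acts_before]
masks = [torch.rand(1, 1, 16, 16), torch.rand(1, 1, 8, 8)]
score = local_score(acts_before, acts_after, masks)
```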

Face expression recognition based on deep generative models
Face Expression Feature Extraction and Representation

Deep convolutional neural networks have been successfully used in image classification tasks to extract discriminative features. The basic idea is to perceive local regions of an image with a large number of neurons and to increase the model's ability to fit non-linear problems by deepening the network [27]. For any facial expression image $x$, the corresponding labelled face images $G(x, y_e, y_p)$ under different poses and expressions can be obtained (for convenience, $G(x, y_e, y_p)$ is abbreviated as $G(x)$ below). Based on the original face image and the newly generated face images, a deep convolutional neural network can be used for feature extraction. The expression feature extraction network contains 16 convolutional layers and 3 fully connected layers. Each facial expression image is propagated forward through the convolutional and fully connected layers, and the length of the output vector of the last fully connected layer equals the number of facial expression categories in each database. After each convolutional layer, the emotional features of the original face image $x$ and the generated face image $G(x)$ can be represented as:
$$h(x) = Q(W_c * x + b_c), \qquad h(G(x)) = Q(W_c * G(x) + b_c)$$

where $W_c$ and $b_c$ denote the weights and bias terms of the expression feature extraction network, $*$ denotes the convolution operation, and $Q$ denotes the non-linear activation function. As the network deepens, the model can learn emotion features with high-level semantic information, which effectively addresses the low emotion recognition rates caused by external factors such as individual differences, lighting changes, and pose changes.
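A minimal sketch of this shared feature extraction step is given below, using a single convolutional layer with a ReLU activation in place of the paper's 16-layer network; the image shapes are illustrative.

```python
import torch
import torch.nn as nn

# Sketch of the feature extraction step: the same convolutional weights W_c, b_c and
# nonlinearity Q (ReLU here) are applied to both the original face image x and the
# generated face image G(x). One conv layer stands in for the 16-layer network.
conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
Q = nn.ReLU()

x = torch.randn(1, 1, 64, 64)        # original grey-scale face image (placeholder)
g_x = torch.randn(1, 1, 64, 64)      # generated face image G(x) (placeholder)

h_x = Q(conv(x))                     # h(x)    = Q(W_c * x + b_c)
h_gx = Q(conv(g_x))                  # h(G(x)) = Q(W_c * G(x) + b_c)
```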

Data-based Face Expression Recognition

A robust representation of emotion features can be obtained through the above steps, and the facial expression classification task can be performed on the extracted features. Specifically, a Softmax loss layer is added after the last fully connected layer of the expression feature extraction network to constrain the difference between the real expression categories and the predicted categories of the face image, so that the loss function can be expressed as:
$$L_c(G, C) = \mathbb{E}_{x,y}\left[ -y_e \log C(h(G(x)), y_e) - y_e \log C(h(x), y_e) \right]$$

This loss function contains two terms: the first is the cross-entropy loss for the facial expression images generated by the generative adversarial network, and the second is the cross-entropy loss for the original real facial expression images. With the help of a large number of generated images, the deep network can be adequately trained. The loss function is optimised with the Adam optimiser, and when the loss no longer decreases, training is finished, yielding an emotion classifier that can be used for multi-pose facial expression recognition.
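A sketch of this two-term classification loss is shown below, with a linear classifier head standing in for the network's last fully connected layer with Softmax loss; the feature dimension and number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the classification loss L_c: a cross-entropy (Softmax) term is applied to
# the classifier outputs for both the generated image features h(G(x)) and the
# original image features h(x), sharing the same expression label y_e.
num_classes, feat_dim = 6, 128
classifier = nn.Linear(feat_dim, num_classes)   # stands in for the last FC layer
ce = nn.CrossEntropyLoss()

h_x = torch.randn(16, feat_dim)                  # features of original images (placeholder)
h_gx = torch.randn(16, feat_dim)                 # features of generated images (placeholder)
y_e = torch.randint(0, num_classes, (16,))       # expression labels

loss_c = ce(classifier(h_gx), y_e) + ce(classifier(h_x), y_e)
```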

According to equation (14), the total loss function of the model can be obtained as:
$$\min_{G,C}\max_{D_{at},D_i}\; \alpha L_{con}(G) + \beta\, TV(G(f(x), y)) + L_c(G, C) + \mathbb{E}_{x,y \sim p_d(x,y)}\left[ \log D_{at}(x, y) \right] + \mathbb{E}_{x,y \sim p_d(x,y)}\left[ \log\left(1 - D_{at}(G(f(x)), y)\right) \right] + \mathbb{E}_{f^{*} \sim prior(f)}\left[ \log D_i(f^{*}) \right] + \mathbb{E}_{x \sim p_d(x)}\left[ \log\left(1 - D_i(f(x))\right) \right]$$

where $\alpha$ and $\beta$ denote the coefficients of the content-consistency loss and the total variation (TV) loss of the generated image, respectively, which balance the smoothness and the resolution of the image. $TV(\cdot)$ denotes the total variation loss between neighbouring pixels of the generated image, which increases its smoothness and can be expressed as:
$$L_{TV} = \sum_{ch=1}^{CH}\sum_{w,h=1}^{W,H} \left\| G(x)_{w+1,h,ch} - G(x)_{w,h,ch} \right\| + \left\| G(x)_{w,h+1,ch} - G(x)_{w,h,ch} \right\|$$

In Eq. (16), $ch$ indexes the channels of the image, and $W$ and $H$ denote the width and height of the image, respectively. During model training, the face image generation network and the expression feature extraction network are first propagated forward, followed by backpropagation, during which the gradient of each network parameter is computed with the Adam optimisation algorithm.
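A minimal sketch of the TV term over horizontal and vertical neighbours is given below; it follows the formula above with an absolute-difference penalty and a placeholder image tensor.

```python
import torch

# Sketch of the total variation (TV) term: absolute differences between vertically
# and horizontally adjacent pixels of the generated image G(x), summed over all
# channels, which penalises abrupt pixel transitions and encourages smoothness.
def tv_loss(gen):                     # gen: (batch, channels, H, W)
    dh = (gen[:, :, 1:, :] - gen[:, :, :-1, :]).abs().sum()   # vertical neighbours
    dw = (gen[:, :, :, 1:] - gen[:, :, :, :-1]).abs().sum()   # horizontal neighbours
    return dh + dw

gen_img = torch.rand(1, 3, 64, 64)    # placeholder generated image G(x)
loss_tv = tv_loss(gen_img)
```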

Employee feedback analysis under corporate HR management strategy

Considering the wide scope of the study, the research objects of this paper are mainly general employees of enterprises of all sizes. The sample was mainly drawn from Shanghai, Guangzhou, Jiangsu, Hangzhou, and other regions, with an effective sample of 618 people.

Employee Facial Expression Recognition Detection Experiment
Experimental preparation

In order to verify the effectiveness of the algorithm in this chapter, facial expression recognition experiments were conducted on Cohn-Kanade, an international standard expression test database, which contains six types of expressions: anger, happiness, sadness, surprise, disgust, and fear, each consisting of sequences of grey-scale, gradually changing expression images. To be precise, some of these data sequences were selected for the image sequence experiments. The main objectives of the experiment are: (1) to test the effectiveness of the algorithm proposed in this chapter in image sequence recognition; (2) to conduct comparative experiments between the static image recognition method and the sequence image recognition method on the Cohn-Kanade expression library data and to analyse the recognition results.

Experiments on Sequence Recognition of Emotional Images

From the Cohn-Kanade expression library, 650 pairs of complete image sequences were selected for a cross-validation design, with results shown in Figure 3. The experimental results indicate that this paper's algorithm achieves an average recognition rate of 97.5%, which is a good recognition rate. The errors are mainly concentrated in the three expressions of sadness, fear, and disgust.

Figure 3. Image sequence recognition rate

Comparison experiment between static image recognition and image sequence recognition

In order to compare the performance of static image recognition and image sequence recognition, experiments were conducted on the Cohn-Kanade expression library on segmented image blocks and on the whole image, respectively. (1) A 16-layer convolutional neural network was designed for the recognition of static images; the recognition results are shown in Table 1. (2) The algorithm proposed in this paper was used to recognise dynamic sequence images; the recognition results are shown in Table 2. Comparing Tables 1 and 2, the main findings are: (1) Compared with the average recognition rate of 93.52% for the static image algorithm, the average recognition rate of this paper's algorithm is 98.72%, an improvement of about 5 percentage points. (2) The two methods differ considerably in their errors, most obviously for the anger expression: the static recognition method achieves a low recognition rate of 79.11%, whereas the dynamic recognition method, which perceives changes of state, reaches 100%.

Table 1. Static image recognition results on the Cohn-Kanade expression library

| Expression | Test sample number | Correct (Group 1) | Group 2 | Group 3 | Group 4 | Group 5 | Average recognition rate (%) |
|---|---|---|---|---|---|---|---|
| Anger | 11 | 11 | 11 | 11 | 11 | 11 | 79.11% |
| Disgust | 24 | 24 | 24 | 24 | 24 | 24 | 91.34% |
| Fear | 15 | 15 | 15 | 15 | 15 | 15 | 84.55% |
| Happiness | 25 | 25 | 25 | 25 | 25 | 25 | 98.21% |
| Sadness | 50 | 50 | 50 | 50 | 50 | 50 | 100% |
| Surprise | 11 | 11 | 11 | 11 | 11 | 11 | 91.81% |
| Total | 143 | 136 | 136 | 136 | 137 | 137 | 93.52% |

Table 2. Dynamic image sequence recognition results on the Cohn-Kanade expression library

| Expression | Test sample number | Correct (Group 1) | Group 2 | Group 3 | Group 4 | Group 5 | Average recognition rate (%) |
|---|---|---|---|---|---|---|---|
| Anger | 25 | 25 | 25 | 25 | 25 | 25 | 100% |
| Disgust | 26 | 25 | 26 | 24 | 25 | 25 | 94.32% |
| Fear | 35 | 34 | 33 | 35 | 35 | 34 | 95.63% |
| Happiness | 26 | 26 | 26 | 26 | 26 | 26 | 100% |
| Sadness | 25 | 24 | 22 | 22 | 21 | 22 | 92.57% |
| Surprise | 50 | 50 | 50 | 50 | 50 | 50 | 100% |
| Total | 187 | 184 | 182 | 182 | 182 | 182 | 98.72% |

Mood Prediction Based on Employees’ Facial Expressions
Survey methodology

Suggestion behavior intention scale: the 10-item suggestion behavior scale is used. Since this study focuses on subordinates' behavior towards their superiors, the wording of some items was changed, for example replacing "colleague" with "boss" or "leader". A 7-point scoring system is used, with "1" meaning "definitely not" and "7" meaning "definitely yes". In this survey, the reliability coefficient of the questionnaire was 0.862.

Job satisfaction scale: the employee satisfaction questionnaire is used. The questionnaire has 6 items in total and adopts a 7-point scoring method, with "1" meaning "very inconsistent" and "7" meaning "very consistent". In this survey, the reliability coefficient was 0.877.

Psychological security scale: the psychological security scale is adopted; it includes 5 test items and uses a 7-point scoring method, with "1" meaning "strongly disagree" and "7" meaning "strongly agree". In this survey, the reliability coefficient was 0.852.

Four groups of employees were set up, and each group of participants followed the same procedure: instructional language → baseline measurement of emotion → baseline measurement of suggestion behavior → baseline measurement of job satisfaction → psychological safety → situational simulation 1 → emotion prediction self-report 1 → suggestion behavior intention posttest 1 → situational simulation 2 → emotion prediction self-report 2 → suggestion behavior intention posttest 2 → situational event probability judgment → demographic variables survey.

Test of mood prediction activation effects

Statistics on the frequency of initiation of specific emotions

Table 3 displays the results of a descriptive statistical analysis of the frequency of specific mood word selection.

From the results in the table, it can be seen that the first group of participants activated positive emotions in the high-involvement positive situation simulation (scenario 1): the activation frequency of "happy" was 152, the largest proportion of all emotion activations at 85.23%, while the activation frequencies of the other two emotions were 15 and 19, with proportions of 8.1% and 6.67%, respectively. In the high-involvement negative situation (scenario 2), the frequencies of "anger", "displeasure", and "dissatisfaction" were 55, 91, and 73, respectively, with "displeasure" accounting for the largest proportion (45.33%), followed by "dissatisfaction" (28.56%) and then "anger" (26.11%).

In the second group of participants, in the low-involvement positive situation simulation (scenario 3), the activation frequencies of "happy", "excited", and "pleasant" were 111, 27, and 21, respectively; "happy" accounted for the largest proportion of all emotion activations at 72.88%, and the proportions of the other two emotions were 16.59% and 10.53%, respectively. In the low-involvement negative situation (scenario 4), the frequencies of "anger", "displeasure", and "dissatisfaction" were 41, 85, and 46, respectively, with "displeasure" accounting for the largest proportion (51.99%), followed by "dissatisfaction" (26.66%) and then "anger" (24.35%).

The third group of participants activated positive emotions in both the high-involvement positive situation (scenario 1) and the low-involvement positive situation (scenario 3); "happy" was activated most frequently in the two scenarios, with 131 (73.54%) and 119 (65.5%), respectively, followed by "excited" with 28 (16.9%) and 32 (18.32%), and "pleasant" with 22 (12.55%) and 24 (16.18%).

The fourth group of participants activated negative emotions in both the high- and low-involvement negative situation simulations; "displeasure" was activated most frequently in the two simulations, at 83 (44.52%) and 83 (43.21%), respectively, followed by "dissatisfaction" at 57 (28.03%) and 67 (32.8%), and "anger" at 51 (27.45%) and 47 (24.11%).

The above results show that all four groups of participants activated emotions with the same valence as the situations they simulated; that is, the positive situational event simulations elicited positive emotion predictions, and the negative situational event simulations elicited negative emotion predictions. On the one hand, this demonstrates the effectiveness of the situation settings in this study; on the other hand, it shows that participants can predict their own positive or negative emotional experience of events they will encounter in the future, i.e., they can produce positive or negative emotion predictions. The scenario simulations therefore achieved the desired effect.

Comparison of the difference in emotional intensity before and after the situation simulation

Using the baseline of the participants' pleasant emotions as the dependent variable and group as the independent variable, the differences in emotion baselines across groups were compared by one-way ANOVA. The results, shown in Tables 4 and 5, indicate that the emotion baselines of the four groups of employee participants satisfied homogeneity of variance (F=2.531, p>0.05) and showed no significant difference (F=0.031, p>0.05).

A paired-samples t-test was used to compare each group's predicted emotion intensity after the constructed speech situation simulation with their emotion baseline; the results are shown in Table 6. The positive or negative emotion intensity predicted by the four groups after the constructed speech situation simulation was significantly higher than the baseline emotion, with the paired-samples t-tests reaching significance for both emotion intensity predictions in each group (p<0.05). The effect size d ranged from 0.276 to 0.518, a moderate effect, indicating that the situational simulation can effectively activate emotion prediction.
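For reference, the statistical tests described above (homogeneity of variance, one-way ANOVA, and the paired-samples t-test with Cohen's d) can be sketched with scipy as follows; the synthetic ratings are placeholders for the survey data, and only the group sizes follow Tables 4 to 7.

```python
import numpy as np
from scipy import stats

# Sketch of the statistical tests: Levene's test and a one-way ANOVA compare the
# baseline emotion across groups, and a paired t-test with Cohen's d compares the
# predicted emotion intensity against the baseline. Ratings here are synthetic.
rng = np.random.default_rng(0)
baselines = [rng.normal(4.3, 1.2, n) for n in (158, 144, 152, 164)]   # per-group baselines

levene_F, levene_p = stats.levene(*baselines)          # homogeneity of variance
anova_F, anova_p = stats.f_oneway(*baselines)          # between-group baseline differences

predicted = baselines[0] + rng.normal(0.5, 1.0, 158)   # predicted intensity, group 1
t_stat, p_val = stats.ttest_rel(predicted, baselines[0])
diff = predicted - baselines[0]
cohens_d = diff.mean() / diff.std(ddof=1)              # paired-samples effect size
```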

Table 3. Specific emotion activation frequency statistics (NA = negative emotion, PA = positive emotion)

| Group | Prediction scenario | Anger (NA) | Displeasure (NA) | Dissatisfaction (NA) | Happy (PA) | Excited (PA) | Pleasant (PA) |
|---|---|---|---|---|---|---|---|
| Group 1 (n=158) | Scenario 1 | 0 | 0 | 0 | 152 (85.23%) | 15 (8.1%) | 19 (6.67%) |
| | Scenario 2 | 55 (26.11%) | 91 (45.33%) | 73 (28.56%) | 0 | 0 | 0 |
| | Scenario 3 | 0 | 0 | 0 | 111 (72.88%) | 27 (16.59%) | 21 (10.53%) |
| | Scenario 4 | 41 (24.35%) | 85 (51.99%) | 46 (26.66%) | 0 | 0 | 0 |
| Group 2 (n=144) | Scenario 1 | 0 | 0 | 0 | 131 (73.54%) | 28 (16.9%) | 22 (12.55%) |
| | Scenario 3 | 0 | 0 | 0 | 119 (65.5%) | 32 (18.32%) | 24 (16.18%) |
| | Scenario 2 | 51 (27.45%) | 83 (44.52%) | 57 (28.03%) | 0 | 0 | 0 |
| | Scenario 4 | 47 (24.11%) | 83 (43.21%) | 67 (32.68%) | 0 | 0 | 0 |

Table 4. Test of homogeneity of variance for the emotion baseline

| F | df1 | df2 | sig |
|---|---|---|---|
| 2.531 | 4 | 567 | 0.095 |

Table 5. One-way ANOVA of emotion baseline differences

| Source | Sum of squares | df | Mean square | F | sig |
|---|---|---|---|---|---|
| Between groups | 0.503 | 5 | 0.163 | 0.031 | 0.958 |
| Within groups | 813.312 | 561 | 1.563 | - | - |
| Total | 814.253 | 568 | - | - | - |

Table 6. Differences between predicted emotion intensity and the emotion baseline

| Group | Emotion | Predicted intensity M | SD | Baseline M | SD | t | d |
|---|---|---|---|---|---|---|---|
| Group 1 (n=158) | PA1 | 4.817 | 1.244 | 4.241 | 1.237 | 3.864** | 0.518 |
| | NA2 | 4.75 | 1.674 | | | 3.452** | 0.355 |
| Group 2 (n=144) | PA1 | 4.745 | 1.405 | 4.278 | 1.165 | 2.631** | 0.351 |
| | NA2 | 4.77 | 1.651 | | | 2.225** | 0.274 |
| Group 3 (n=152) | PA1 | 4.829 | 1.265 | 4.373 | 1.152 | 2.912** | 0.314 |
| | PA2 | 4.81 | 1.347 | | | 2.121** | 0.276 |
| Group 4 (n=164) | NA1 | 4.873 | 1.696 | 4.352 | 1.413 | 2.543** | 0.368 |
| | NA2 | 4.83 | 1.547 | | | 2.751** | 0.388 |

Note: "**" indicates p<0.01 (two-tailed), "*" indicates p<0.05 (two-tailed); d is Cohen's d, the effect size of the t-test.

Sequential effects test for mood prediction

Since each group of participants went through two mental simulations of situational events, in order to avoid any effect of the simulation order of the constructed situations on emotion prediction, half of the participants in each group completed the constructed situations in order AB and the other half in order BA. The order effect on emotion prediction was tested mainly with independent-samples t-tests, and the results are shown in Table 7. As the table shows, there was no significant difference (p>0.05) in the participants' positive or negative emotion intensity predictions or duration predictions between the different situation order combinations in any group, indicating that situation order did not affect participants' predictions of emotion intensity or duration in this study.

Table 7. Test of the order effect on emotion prediction

| Group | Situation | Emotion type | Order | N | Intensity M | SD | t | sig | Duration M | SD | t | sig |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Group 1 (n=158) | Scenario 1 | PA1 | A | 79 | 4.955 | 1.235 | 1.757 | 0.448 | 4.41 | 1.83 | 1.744 | 0.159 |
| | | | B | 79 | 4.678 | 1.228 | | | 3.96 | 1.988 | | |
| | Scenario 2 | NA2 | A | 79 | 4.74 | 1.752 | 0.048 | 1.276 | 4.9 | 1.925 | 1.681 | 0.174 |
| | | | B | 79 | 4.76 | 1.602 | | | 4.46 | 1.988 | | |
| Group 2 (n=144) | Scenario 3 | PA1 | A | 72 | 4.672 | 1.374 | -0.512 | 0.864 | 4.21 | 1.877 | 0.336 | 0.885 |
| | | | B | 72 | 4.817 | 1.44 | | | 4.15 | 1.833 | | |
| | Scenario 4 | NA2 | A | 72 | 4.8 | 1.632 | 0.369 | 1.154 | 5.18 | 1.911 | 1.34 | 0.329 |
| | | | B | 72 | 4.8 | 1.632 | | | 4.83 | 1.826 | | |
| Group 3 (n=152) | Scenario 1 | PA1 | A | 76 | 4.912 | 1.293 | 1.073 | 0.69 | 3.73 | 1.728 | -0.752 | 0.434 |
| | | | B | 76 | 4.745 | 1.236 | | | 3.97 | 1.687 | | |
| | Scenario 2 | PA2 | A | 76 | 4.86 | 1.246 | 0.586 | 0.992 | 3.46 | 1.985 | -1.139 | 0.26 |
| | | | B | 76 | 4.79 | 1.188 | | | 3.83 | 1.97 | | |
| Group 4 (n=164) | Scenario 3 | NA1 | A | 82 | 4.919 | 1.684 | 0.479 | 1.071 | 4.65 | 2.057 | -1.23 | 0.23 |
| | | | B | 82 | 4.827 | 1.719 | | | 5.07 | 1.867 | | |
| | Scenario 4 | NA2 | A | 82 | 4.88 | 1.56 | 0.494 | 1.059 | 4.81 | 1.683 | -0.306 | 0.719 |
| | | | B | 82 | 4.79 | 1.542 | | | 4.92 | 1.639 | | |

Conclusion

This study proposes a deep generative model-based facial expression recognition technique for enterprise HR management and analyses it empirically on the Cohn-Kanade expression library. The recognition algorithm, tested on Cohn-Kanade expression image sequences, achieves a recognition rate as high as 98.25%, showing that the facial expression recognition method in this paper is effective and highly accurate. In addition, 618 experimental cases were selected to reflect the prediction of employee satisfaction through facial expression recognition. It was found that after the optimisation of human resource management, employees' positive emotions in the simulated events of different scenarios were higher, with significant differences, indicating that employees respond to their work with greater satisfaction under the optimised human resource management strategy.
