
Sentiment Analysis of Chinese Classic Literary Works Based on Natural Language Processing

  

Introduction

Literary works are an expression of human emotion; novels, poetry, and other genres are the carriers of such emotional expression [1]. These emotions are usually conveyed through the characters in a work, so the study of emotion in literature is not only an examination of its characters but also a way of understanding and perceiving the work as a whole. Emotion in literary works can be expressed in many ways. Authors may reveal emotion by describing characters' words and behaviors [23], but more often it is conveyed through the characters' inner thoughts, which allows readers not only to understand the characters' feelings deeply but also to experience the lives they lead more vividly. Emotions in literary works can be divided into positive and negative emotions [4]. The expression of positive emotion strengthens a character's positive qualities and increases the resonance and appeal for readers, while the expression of negative emotion makes a character more vivid and heightens the impact and force of the story. The emotion of a literary work is usually closely tied to its plot and is expressed differently across genres [5]. Sentiment analysis therefore supports a better understanding of literary works and further promotes literary research [6]. For students and scholars interested in literature, understanding and analyzing characters' emotions is also essential: by deeply grasping the emotions of the characters, the spiritual connotation and emotional resonance of a work can be better appreciated and its values and meanings fully revealed.

There are many classic Chinese literary works, and the richness and variety of their emotional color and expression is one of their defining features. For scholars who study emotion in literature, Chinese classics must fall within the scope of research. Studying them not only broadens one's knowledge but also deepens the understanding of human nature and worldviews, allowing literary research to go further.

In this paper, a sentiment analysis model based on a two-channel convolutional neural network and Bi-LSTM is proposed, which makes full use of the textual information by fusing the deep semantic features of words, phrases, and neighboring words. Jia Baoyu in Dream of Red Mansions is taken as the research object for sentiment analysis, and an emotion view is constructed to visualize the sentiment along the plot line of the work. CNN and bidirectional LSTM algorithms are then applied to realize global sentiment discrimination among multiple entities in literary works through sequence recognition. Three Chinese classics, Dream of Red Mansions, White Deer Plain, and Thunderstorm, are used as the corpus; the overall emotional relationship between characters is derived from the emotional polarity of each round of dialogue, and global emotion discrimination between characters is realized with an LSTM model.

Overview

Chinese classic literature contains a large amount of the characters' emotion as well as the author's own emotion, so analyzing it is a complex process in which scholars are prone to partial errors. With contemporary technology, researchers have advanced the computational analysis of literary emotion. The literature [7] used a support vector machine to detect and classify poetic emotion in Punjabi poetry, with classification accuracy as high as 70%, while [8] used deep learning to classify the emotional state of poetry with an accuracy of 88%. In addition, [9] mined and analyzed the emotions of character dialogues across different novel scenes using long short-term memory networks and constructed an emotion matrix from which trends of emotional change can also be mined. Similarly, [10] showed that the SentiArt tool can predict character emotions and personality traits in literature well.

At present, there is no study that analyzes the emotions of classic Chinese literature itself; existing work classifies texts purely by emotional state, yet complex emotions remain challenging to classify. Since natural language processing can apply different computational methods to analyze human language [11], it is feasible to analyze the emotion of the language art in literature; its focus on language itself allows the emotion in literary works to be analyzed more accurately.

Natural Language Processing
BERT model

Text representation is an important and fundamental part of natural language processing: it converts textual information into a representation that a machine can process, making it easy for machine learning algorithms to understand the text. A distributed representation maps the text from its original form (e.g., a string) into a vector space, i.e., words are converted into fixed-length, continuous, dense vectors that can capture and characterize similarity, structure, and semantics in language. The remainder of this section focuses on the BERT model as a distributed representation.

BERT (Bidirectional Encoder Representations from Transformers) can simultaneously obtain a bidirectional representation of the input text and thus constructs a semantically richer textual representation. BERT solves the problem of words with multiple meanings better than ELMo, with the following three key points:

BERT is based on the Encoder of the Transformer architecture, characterized by the multi-head attention mechanism, which makes it faster to train than recurrent neural networks and, compared with convolutional neural networks, able to obtain the global information of the text;


Compared with previous language models, BERT has a larger pre-training corpus: it is pre-trained on a large corpus of unlabeled text from Wikipedia and a book corpus, thus covering more information and acquiring more knowledge;

BERT is a deep bidirectional model that obtains the left and right contexts of a target word at the same time. For example, given the sentences "we went to the river bank." and "I need to go to the bank to make a deposit.", relying solely on the context on only one side to predict the meaning of the word "bank" will produce at least one error; the solution is to capture both the left and the right context of "bank" before predicting it.

The modeling framework of BERT is shown in Fig. 1; it uses both the Masked Language Model (MLM) and Next Sentence Prediction (NSP) tasks in order to train the model more effectively.

Figure 1.

BERT model framework

With respect to the MLM task, BERT adopted the following practices:

Randomly masking certain words in the input text using the “[MASK]” marker;

Predicting only those words that are masked;

The final hidden vector corresponding to a masked word is fed into a softmax over the vocabulary to predict that word, the same operation as in other standard language models.

However, when the BERT model is fine-tuned, the special "[MASK]" token used during pre-training usually does not appear in the input text sequences, which creates an inconsistency between the objectives of the pre-training and fine-tuning phases and thus greatly limits the capability of the model. To alleviate this mismatch between the two phases, the following masking strategy is used: 15% of the words in the input text are selected for masking; of these selected words, 80% are replaced with "[MASK]", 10% are replaced with a random word, and 10% are kept unchanged.
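As an illustration, a minimal sketch of this 15% / 80% / 10% / 10% masking strategy (the function and toy vocabulary below are hypothetical, not BERT's actual implementation):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Sketch of the masking strategy described above: each token is selected with
    15% probability; a selected token is replaced by "[MASK]" 80% of the time,
    by a random vocabulary word 10% of the time, and left unchanged 10% of the time."""
    masked = list(tokens)
    labels = [None] * len(tokens)          # prediction targets, set only at selected positions
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            labels[i] = tok                # the model is asked to predict the original token
            r = random.random()
            if r < 0.8:
                masked[i] = "[MASK]"
            elif r < 0.9:
                masked[i] = random.choice(vocab)
            # else: keep the original token unchanged
    return masked, labels

# Toy usage with a hypothetical vocabulary:
vocab = ["we", "went", "to", "the", "river", "bank", "deposit"]
print(mask_tokens("we went to the river bank".split(), vocab))
```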

Regarding the NSP task, the aim is to equip the model with the ability to understand the relationship between sentences. Many important downstream tasks require an understanding of the relationship between two sentences that cannot be obtained directly through language modeling. By predicting whether one sentence is the next sentence of a given text, the model learns inter-sentence relationships and thus performs better on downstream tasks. Concretely, each pre-training sample fed to BERT contains two sentences A and B, and a binary classification task lets BERT judge whether sentence B is the next sentence of sentence A. In 50% of the pre-training samples, sentence B really is the next sentence of sentence A; in the other 50%, sentence B is a random sentence from the corpus.
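A minimal sketch of how such NSP training pairs could be built (a hypothetical helper under the 50/50 scheme above, not the original implementation):

```python
import random

def make_nsp_pairs(sentences):
    """Build Next Sentence Prediction samples: for each adjacent pair (A, B),
    keep B as the true next sentence with 50% probability ("IsNext"), otherwise
    replace B with a random sentence from the corpus ("NotNext")."""
    pairs = []
    for i in range(len(sentences) - 1):
        a = sentences[i]
        if random.random() < 0.5:
            b, label = sentences[i + 1], "IsNext"
        else:
            b, label = random.choice(sentences), "NotNext"
        pairs.append((a, b, label))
    return pairs
```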

The loss function of BERT is shown in equation (1):

$$L(\theta,\theta_1,\theta_2) = L_1(\theta,\theta_1) + L_2(\theta,\theta_2) \tag{1}$$

It can be seen that the loss consists of two parts, where $L_1(\theta,\theta_1)$ is the loss function for the MLM task and $L_2(\theta,\theta_2)$ is the loss function for the NSP task. More specifically, $L_1(\theta,\theta_1)$ and $L_2(\theta,\theta_2)$ are calculated as shown in Eq. (2) and Eq. (3), respectively:

$$L_1(\theta,\theta_1) = -\sum_{i=1}^{M} \log p(m = m_i \mid \theta,\theta_1), \quad m_i \in [1,2,\dots,|V|] \tag{2}$$

$$L_2(\theta,\theta_2) = -\sum_{i=1}^{N} \log p(n = n_i \mid \theta,\theta_2), \quad n_i \in \{\mathrm{IsNext}, \mathrm{NotNext}\} \tag{3}$$

The bidirectional text representation of BERT is obtained through an integrated fusion approach, whereas the representation obtained by ELMo, although also bidirectional, is actually the concatenation of two unidirectional text representations, so BERT is more expressive in text representation. Meanwhile, for downstream tasks BERT uses pre-training plus fine-tuning, which is simpler to use than ELMo's feature-based approach.

In summary, from the vector space model to the distributed representation model, from TF-IDF to BERT, text representation, as the foundation of natural language processing, has developed step by step, and the various methods each still have their uses owing to the diversity of natural language processing task goals and environments.

Two-channel Convolutional Neural Network and Bidirectional LSTM Model Architecture

Although deep learning models are widely used in text sentiment analysis tasks and have achieved good performance, current models are mainly word-based neural networks; their single perspective and insufficient feature extraction lead to poor classification performance. Moreover, since Chinese is a semantically rich language with weak morphology, extracting text features only from the word perspective prevents the model from accurately learning the semantics of the text. Therefore, this chapter uses a two-channel CNN and a bidirectional LSTM to extract text features from both the word-vector and character-vector perspectives, so as to fully learn the semantic information of the text and improve the performance of Chinese text sentiment classification.
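As a small illustration of the two perspectives, the word-level and character-level views of the same Chinese sentence differ (a sketch; jieba and the example sentence are only illustrative choices, not necessarily those used in this paper):

```python
import jieba  # a common Chinese word-segmentation library; any segmenter would do

text = "贾宝玉心中十分欢喜"   # hypothetical example sentence

word_tokens = jieba.lcut(text)   # word-level view, e.g. something like ['贾宝玉', '心中', '十分', '欢喜']
char_tokens = list(text)         # character-level view, one token per Chinese character

print(word_tokens)
print(char_tokens)
```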

The structure of the proposed model is shown in Figure 2. The overall process is as follows: first, the text is preprocessed and transformed into word vectors and character vectors. Then, a one-dimensional convolutional neural network extracts the hidden features of the text from the word vectors and the character vectors and combines them, and the results extracted by the convolutional network are used as inputs to a BLSTM, which learns the sequence features of the text. Finally, the text features learned by the two channels are spliced in the fusion layer and passed through a fully connected layer and a classifier for text sentiment analysis. This section elaborates on the main components of the proposed model: the convolutional layer, the BiLSTM feature extraction layer, and the fusion layer, and further explains the processing flow.

Figure 2.

CNN and BiLSTM model architecture

Convolutional layers

The text, after word segmentation and other preprocessing, enters the convolutional neural network in the form of word vectors; it first passes through the convolutional layer, where convolution is computed to derive the strongest feature vectors, which are then fed into the max pooling layer. The role of the convolutional layer is feature extraction, and its calculation is similar to that of the feed-forward neural network introduced earlier. Let the text to be processed be denoted by x; the input sentence can then be written as x = (x1, x2, ⋯, xn). Let $x_i \in \mathbb{R}^k$ be the k-dimensional vector corresponding to the i-th word in the sentence; assuming the sentence length is n, the text can be denoted as $x \in \mathbb{R}^{n \times k}$. A sliding window, i.e., a convolution kernel matrix (filter), must then be set up; in image tasks this is usually a square matrix, e.g., of size 4×4. When dealing with text, the height of the window is h (a hyperparameter that has to be tuned experimentally; this paper selects h = 7), while the width of the kernel is set to match the dimension of the word vectors, so that the kernel spans whole words; whether the sentence needs to be supplemented with 〈pad〉 tokens is decided according to the sentence length and the kernel size. The convolution process is shown in Fig. 3.

Figure 3.

Convolution process

Next, the convolution is computed step by step with the convolution kernel. The stride of the kernel is set to 1, and the kernel moves only up and down, not left and right, which guarantees that the output feature vectors have equal length; by sliding the kernel, a number of different feature maps are finally obtained as output. Let the length of the input sentence be n, so that the sentence can be expressed as $(x_1 \oplus x_2 \oplus \cdots \oplus x_n)$; the elements of the convolution output are then given by equation (4):

$$c_i = f\left(w \cdot (x_i \oplus x_{i+1} \oplus \cdots \oplus x_{i+h-1}) + b\right) \tag{4}$$

Here $f(\cdot)$ is the activation function, introduced in Chapter 2; this paper uses the ReLU function. Different feature-map vectors C are obtained by sliding the convolution kernel window step by step, as shown in equation (5):

$$C = (c_1, c_2, \dots, c_{n-h+1}) \tag{5}$$
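A minimal sketch of this convolution (Eqs. (4)-(5)) over a sentence matrix, with hypothetical sizes and randomly initialized parameters:

```python
import numpy as np

def text_conv1d(x, w, b, h, relu=lambda z: np.maximum(z, 0.0)):
    """x is the sentence matrix of shape (n, k): n words, k-dimensional word vectors.
    w is a convolution kernel of shape (h*k,); it slides over windows of h consecutive
    word vectors with stride 1, producing a feature map of length n-h+1."""
    n, k = x.shape
    feature_map = []
    for i in range(n - h + 1):
        window = x[i:i + h].reshape(-1)                    # x_i ⊕ x_{i+1} ⊕ ... ⊕ x_{i+h-1}
        feature_map.append(relu(np.dot(w, window) + b))    # c_i = f(w · window + b)
    return np.array(feature_map)                           # C = (c_1, ..., c_{n-h+1})

# Toy usage with the window height h = 7 chosen in this paper (other sizes are illustrative):
x = np.random.randn(20, 50)        # 20 words, 50-dimensional embeddings
w = np.random.randn(7 * 50)
print(text_conv1d(x, w, b=0.1, h=7).shape)   # -> (14,)
```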

BiLSTM feature extraction layer

In text sentiment analysis research there are many models that can extract features from text, such as CNN and LSTM. However, a CNN cannot resolve long-distance dependencies within a sequence, and a unidirectional LSTM cannot obtain the full contextual information of the text. To extract the sequence features of the text comprehensively, the BiLSTM model is introduced in the feature extraction layer: because it considers information flowing both from the past to the future and from the future to the past, the model can learn the semantic features of the text in both directions and capture long-distance-dependent sequence features, which can then be used for deep global emotion feature extraction. The BiLSTM model in the feature extraction layer takes the output vector matrix E of the word embedding layer and, by combining the contextual information of each character in the sequence, captures a more comprehensive feature representation of the text.

The input at each time step of the BiLSTM model is processed by both a forward LSTM and a reverse LSTM. The forward LSTM updates its hidden and memory states by traversing the input sequence from front to back, while the reverse LSTM updates them by traversing the sequence from back to front. Starting with the forward LSTM: for the character vector $e_t$ at any position t of the input sequence, the input $e_t$ at the current moment is combined with the hidden-layer output state $h_{t-1}$ of the previous moment to compute the hidden-layer output state at the current moment, denoted in the forward direction as $\overrightarrow{h_t}$. The formulas involved are shown in Eqs. (6)-(11):

$$f_t = \sigma(W_f[h_{t-1}, e_t] + b_f) \tag{6}$$
$$\tilde{C}_t = \tanh(W_c[h_{t-1}, e_t] + b_c) \tag{7}$$
$$i_t = \sigma(W_i[h_{t-1}, e_t] + b_i) \tag{8}$$
$$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t \tag{9}$$
$$o_t = \sigma(W_o[h_{t-1}, e_t] + b_o) \tag{10}$$
$$h_t = o_t * \tanh(C_t) \tag{11}$$

In these equations, $f_t$ is the state of the forget gate, $i_t$ the state of the input gate, and $o_t$ the state of the output gate; $W_f$, $W_i$, and $W_o$ are the weight matrices of the forget, input, and output gates, respectively; $b_f$, $b_i$, and $b_o$ are the corresponding bias vectors; and $\sigma$ is the Sigmoid function.

The reverse LSTM then performs the same calculation from time t backward: combining the character vector $e_t$ at position t of the input sequence with the hidden-layer output state of the previous (reverse) step yields the backward representation $\overleftarrow{h_t}$. Concatenating the forward representation $\overrightarrow{h_t}$ and the backward representation $\overleftarrow{h_t}$ gives the complete contextual information of the text, i.e., the final hidden state $h_t$ at the current moment t, as shown in equation (12):

$$h_t = \overrightarrow{h_t} \oplus \overleftarrow{h_t} \tag{12}$$
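The forward/backward computation of Eqs. (6)-(12) can be sketched in plain NumPy (a minimal illustration; the parameter shapes, dictionary keys, and toy sizes are assumptions for demonstration, not the paper's code):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def lstm_step(e_t, h_prev, c_prev, W, b):
    """One LSTM time step following Eqs. (6)-(11); W and b bundle the forget ('f'),
    input ('i'), candidate ('c') and output ('o') parameters."""
    x = np.concatenate([h_prev, e_t])                 # [h_{t-1}, e_t]
    f = sigmoid(W['f'] @ x + b['f'])                  # forget gate
    i = sigmoid(W['i'] @ x + b['i'])                  # input gate
    c_tilde = np.tanh(W['c'] @ x + b['c'])            # candidate memory
    c = f * c_prev + i * c_tilde                      # new cell state
    o = sigmoid(W['o'] @ x + b['o'])                  # output gate
    h = o * np.tanh(c)                                # new hidden state
    return h, c

def bilstm(seq, params_fwd, params_bwd, hidden):
    """Run the sequence front-to-back and back-to-front, then concatenate the two
    hidden states at each position, as in Eq. (12)."""
    h_f = np.zeros(hidden); c_f = np.zeros(hidden)
    h_b = np.zeros(hidden); c_b = np.zeros(hidden)
    fwd, bwd = [], []
    for e in seq:                              # forward pass
        h_f, c_f = lstm_step(e, h_f, c_f, *params_fwd)
        fwd.append(h_f)
    for e in reversed(seq):                    # backward pass
        h_b, c_b = lstm_step(e, h_b, c_b, *params_bwd)
        bwd.append(h_b)
    bwd.reverse()
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# Toy usage: 5 time steps, 4-dimensional character vectors, hidden size 8.
rng = np.random.default_rng(0)
make = lambda: ({k: rng.standard_normal((8, 12)) for k in "fico"},
                {k: rng.standard_normal(8) for k in "fico"})
seq = [rng.standard_normal(4) for _ in range(5)]
print(bilstm(seq, make(), make(), hidden=8)[0].shape)   # -> (16,)
```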

Fusion layers

Through the convolutional layer and the bidirectional LSTM layer, two feature-vector representations of the text are obtained, one from the word-vector perspective and one from the character-vector perspective. To obtain the global feature information, the features learned by the two channels are fused through the fusion layer (Merge layer). Assuming that the features learned by channel one are $S_1$ and the features learned by channel two are $S_2$, the global feature S is computed by formula (13):

$$S = S_1 \oplus S_2 \tag{13}$$

Here ⊕ is the concatenation operator; this paper uses the concatenate function in Keras. The fused features are used as the input of the Dense layer and the classifier to realize the text sentiment analysis task.
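A hedged sketch of the two-channel architecture in Keras, following the description above (the layer widths, sequence lengths, and vocabulary sizes are illustrative assumptions; only the embedding dimension 30 and kernel height 7 come from this paper):

```python
from tensorflow.keras.layers import (Input, Embedding, Conv1D, Bidirectional,
                                     LSTM, Dense, concatenate)
from tensorflow.keras.models import Model

def build_channel(seq_len, vocab_size, emb_dim=30, kernel_h=7, name=""):
    """One channel: embedding -> 1D convolution -> bidirectional LSTM."""
    inp = Input(shape=(seq_len,), name=f"{name}_input")
    x = Embedding(vocab_size, emb_dim)(inp)
    x = Conv1D(filters=128, kernel_size=kernel_h, activation="relu")(x)
    x = Bidirectional(LSTM(128))(x)   # final forward/backward states, concatenated
    return inp, x

# Hypothetical sizes for illustration; the paper does not report all hyperparameters.
word_in, word_feat = build_channel(seq_len=100, vocab_size=50000, name="word")
char_in, char_feat = build_channel(seq_len=200, vocab_size=6000, name="char")

fused = concatenate([word_feat, char_feat])            # fusion layer: S = S1 ⊕ S2
out = Dense(2, activation="softmax")(Dense(64, activation="relu")(fused))

model = Model(inputs=[word_in, char_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```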

Emotional Analysis of Chinese Classic Literary Works
Emotional view of a single character

In the emotion view, the corresponding emotion values are extracted for each character in each chapter based on an emotion dictionary applied to the relevant text content. The emotional trend of a character is expressed as a curve graph: the character's emotion values are plotted in chapter order, and the trend of the curve expresses the character's emotional development. In this paper, the Chinese classic Dream of the Red Chamber is used as a case study for sentiment analysis and visualization.

In order to study the emotional trend of a character across chapters, a "character-chapter emotion value" curve is plotted with the chapter number on the horizontal axis and the binary emotion value on the vertical axis. Taking chapters 1-120 as the horizontal-axis units, the value of each point on the curve is the sum of the emotion values computed from the corresponding character-related text. Taking Jia Baoyu's emotion value curve as an example (Fig. 4), the curve fluctuates both above and below the zero axis, but overall the points above the zero axis are in the majority and their absolute values are larger. From the trend line it can be clearly seen that in the first half Jia Baoyu's emotion values are generally high and mostly positive, peaking in Chapter 20; in the middle part the values fluctuate and reach their lowest point around Chapter 60; in the latter part the values fluctuate again and, by the decline of the Jia family at the end, fall far below 0. Combining the positions of the curve with the changes in the trend line, we can judge that the emotions in the earlier part are more positive overall, while those in the latter part are more negative overall. Returning to the text, the affective tendencies corresponding to the changes in the curve can be found: Chapter 20, for example, describes the lively scene of the Lantern Festival, in which Jia Baoyu occupies an important position and enjoys the joy of the family reunion, so the emotion value is higher; Chapter 60 describes Jia Baoyu being depressed because of a misunderstanding, with his relationship with Lin Daiyu in a tense state, so the emotion value there is significantly lower.

Figure 4.

Emotion curve view example
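The construction of such a curve can be sketched as follows (a toy illustration: the dictionary, chapter texts, and whitespace tokenization are hypothetical stand-ins for the Chinese sentiment dictionary and the segmented chapter text actually used):

```python
import matplotlib.pyplot as plt

def chapter_emotion_values(chapters, emotion_dict):
    """For each chapter, sum the dictionary scores of the emotion words found in
    the character-related text (dictionary-based scoring as described above)."""
    return [sum(emotion_dict.get(w, 0) for w in text.split()) for text in chapters]

# Toy data only; the real input is the character-related text of chapters 1-120.
emotion_dict = {"joy": 2, "reunion": 1, "sad": -2, "quarrel": -1}
chapters = ["joy reunion joy", "quarrel sad", "reunion"]
values = chapter_emotion_values(chapters, emotion_dict)

plt.plot(range(1, len(values) + 1), values, marker="o")
plt.axhline(0, linewidth=0.8)        # the zero axis the curve fluctuates around
plt.xlabel("Chapter")
plt.ylabel("Emotion value")
plt.show()
```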

Emotional analysis of multiple roles
Comparison of results

In this experiment, the dialogues occurring between characters are first extracted and their emotions manually labeled for the three novels: 3,000 sentences are labeled for Dream of Red Mansions and for White Deer Plain, and 2,000 sentences for Thunderstorm, in each case accounting for more than 50% of the total dialogues. A bidirectional LSTM algorithm is then used to train the emotion discrimination model, with sentence-level emotions encoded as 1, 0, or -1; the sentence-level emotion judgment accuracy on all three works is above 85%. The number of occurrences of each emotion is then counted and used directly as the elements of the chapter's emotion vector, and finally these intermediate results are grouped in chapter order, which can be expressed as the emotion sequence of the whole text.

In the statistical approach, this paper assumes that different sentence emotions have a linear effect on the overall emotion. Since the chapter sentiment polarity is represented by vectors, the statistical model simply accumulates the counts of each sentiment separately to obtain a global sentiment vector, and the sentiment represented by the largest component is taken as the global emotion between two characters.
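A minimal sketch of this statistical accumulation (the 1/0/-1 label encoding follows the previous subsection; the function name is hypothetical):

```python
from collections import Counter

def global_sentiment(sentence_labels):
    """Accumulate per-sentence sentiment labels (1 positive, 0 neutral, -1 negative)
    into a count vector and take the sentiment with the largest count as the global
    emotion between two characters, as described above."""
    counts = Counter(sentence_labels)            # e.g. Counter({1: 40, 0: 25, -1: 10})
    vector = [counts[1], counts[0], counts[-1]]  # (Pos, Neu, Neg) accumulation
    return ["Pos", "Neu", "Neg"][vector.index(max(vector))]

print(global_sentiment([1, 1, 0, -1, 1, 0]))   # -> "Pos"
```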

For the model proposed in this paper, the chapter vectors are first divided into a training set and a test set. The vectors of the training chapters are used as inputs to the network. In the LSTM, the elements of the input vectors are fed in chapter order so as to extract the developmental pattern of emotions and predict the global emotion between characters. The LSTM contains 5 hidden layers, each with 256 units, and uses Adam as the optimizer.
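A sketch of this network in Keras under the stated configuration (5 stacked LSTM layers of 256 units, Adam optimizer); the input dimensions and the 3-way Pos/Neu/Neg output are illustrative assumptions:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

n_chapters, vec_dim = 120, 3   # chapter-level emotion vectors fed in chapter order

model = Sequential()
model.add(LSTM(256, return_sequences=True, input_shape=(n_chapters, vec_dim)))
for _ in range(3):
    model.add(LSTM(256, return_sequences=True))
model.add(LSTM(256))                        # fifth LSTM layer returns a single vector
model.add(Dense(3, activation="softmax"))   # global emotion: Pos / Neu / Neg
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```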

Tables 1, 2 and 3 show the global emotion matrices between the characters in Dream of the Red Chamber, White Deer Plain and Thunderstorm, respectively, as predicted by the LSTM network. Positive (Pos) indicates that the emotion between the two characters is positive, Neutral (Neu) that it is neutral, and Negative (Neg) that it is negative. In each cell, the label before the slash is the emotion predicted by the model and the label after the slash is the standard emotion annotated by experts; cells where the two differ correspond to wrong predictions (blackened in the original tables).

Table 1.

The emotional matrix of the characters in A Dream of Red Mansions

        Role1    Role2    Role3    Role4    Role5    Role6    Role7    Role8    Role9
Role1   —        Pos/Pos  Pos/Pos  Pos/Pos  Pos/Neu  Pos/Pos  Pos/Pos  Pos/Pos  Pos/Neu
Role2   Pos/Pos  —        Neg/Neg  Neu/Pos  Neu/Neu  Pos/Pos  Neu/Neu  Pos/Pos  Neg/Neg
Role3   Pos/Pos  Neg/Neg  —        Pos/Pos  Pos/Neu  Neu/Neg  Neu/Neu  Pos/Pos  Neu/Pos
Role4   Neu/Neu  Pos/Pos  Neg/Neg  —        Pos/Pos  Neu/Pos  Pos/Pos  Pos/Pos  Neg/Neg
Role5   Neg/Neu  Neu/Neu  Neg/Neg  Pos/Pos  —        Neu/Neu  Neu/Neu  Neu/Pos  Neu/Neu
Role6   Neu/Neu  Neu/Pos  Pos/Pos  Pos/Neu  Pos/Pos  —        Pos/Pos  Pos/Pos  Neu/Neg
Role7   Pos/Pos  Neu/Neu  Pos/Neu  Neu/Neu  Neu/Neg  Pos/Pos  —        Neu/Pos  Pos/Pos
Role8   Pos/Pos  Pos/Pos  Pos/Pos  Neu/Pos  Neu/Neg  Neu/Pos  Neu/Pos  —        Neu/Neu
Role9   Neu/Neu  Neg/Neu  Pos/Neu  Neg/Neg  Pos/Neu  Pos/Pos  Neu/Pos  Pos/Pos  —

Table 2.

The emotional matrix of the characters in White Deer Plain

        Role1    Role2    Role3    Role4    Role5    Role6    Role7
Role1   —        Pos/Pos  Neg/Neg  Neu/Pos  Neg/Neg  Neu/Pos  Neg/Neg
Role2   Pos/Pos  —        Neg/Neu  Neu/Pos  Neu/Neu  Pos/Pos  Neg/Neg
Role3   Neg/Neg  Neu/Neu  —        Neg/Neu  Neg/Neg  Neu/Neu  Neu/Pos
Role4   Neu/Pos  Pos/Pos  Neu/Neu  —        Pos/Neu  Neg/Neg  Neu/Neu
Role5   Neg/Neg  Neg/Neu  Neg/Neg  Neu/Neu  —        Neg/Neg  Pos/Neu
Role6   Pos/Pos  Neu/Neu  Neu/Neg  Neu/Neu  Neg/Neg  —        Pos/Pos
Role7   Neg/Neg  Neu/Neg  Pos/Pos  Neu/Neu  Neu/Pos  Pos/Pos  —

Table 3.

The emotional matrix of the characters in Thunderstorm

        Role1    Role2    Role3    Role4    Role5    Role6    Role7    Role8
Role1   —        Pos/Pos  Neg/Neu  Neg/Neu  Neg/Neg  Pos/Pos  Neg/Neg  Neg/Neg
Role2   Pos/Pos  —        Neu/Neg  Neu/Neu  Pos/Pos  Neu/Neu  Neg/Neg  Neg/Neg
Role3   Pos/Neu  Pos/Pos  —        Neg/Neg  Neu/Neu  Neg/Neu  Neg/Neg  Neu/Neu
Role4   Pos/Neu  Pos/Neu  Neg/Neg  —        Neg/Neu  Neu/Neu  Neg/Neg  Pos/Pos
Role5   Pos/Pos  Neu/Neu  Neu/Neg  Neu/Neu  —        Neg/Neg  Pos/Neu  Neu/Neu
Role6   Pos/Neg  Pos/Pos  Neu/Neu  Neg/Neg  Neg/Neu  —        Neu/Neu  Pos/Pos
Role7   Neg/Neg  Pos/Pos  Neu/Neu  Neu/Pos  Neu/Neu  Neu/Neu  —        Pos/Pos
Role8   Pos/Pos  Neg/Neg  Pos/Pos  Neu/Neu  Neg/Neg  Pos/Pos  Neu/Pos  —

Here, the nine main characters in Dream of Red Mansions are Jia Baoyu, Lin Daiyu, Xue Baochai, Wang Xifeng, Jia Lian, Jia Zheng, Tanchun, and Madame Wang; the seven main characters in White Deer Plain are Bai Jiaxuan, Xiancao, Lu Zilin, Mr. Zhu, Bai Xiaowen, Heiwa, and Tian Xiao'e; and the eight characters in Thunderstorm are Zhou Puyuan, Zhou Fanyi, Lu Shiping, Zhou Ping, Sifeng, Lu Gui, Zhou Chong, and Lu Dahai. From the tables it can be seen that the accuracy of character sentiment discrimination in White Deer Plain and Thunderstorm is higher than in Dream of Red Mansions. The likely reasons are that the first two works have fewer chapters and characters, so the model complexity is lower, and that Dream of Red Mansions is written in early vernacular Chinese, whereas the word vectors and character vectors used for sentence-level sentiment judgment in this experiment were trained on modern Chinese; the extraction of textual information may therefore be imperfect, which affects the accuracy of sentence-level sentiment judgment. Meanwhile, compared with Dream of the Red Chamber and White Deer Plain, the global accuracy for the characters in Thunderstorm is significantly higher. This is probably because Thunderstorm is a drama whose entire text consists of dialogues whose initiators are marked in the text, which reduces the error rate of the preliminary preparation; at the same time, as a drama expressing family and social conflict, the emotional expression in its characters' dialogues is more exaggerated than in works such as Dream of the Red Chamber and White Deer Plain, so the emotional polarity is easier to derive from the original utterances, and it is natural that the final global emotion accuracy is higher.

Analysis of results

The dataset used in this paper consists of sentiment sentences extracted from Dream of Red Mansions, 4,638 items in total, of which 2,509 are positive and 2,129 negative. The model used for the experiment is the BERT model with a learning rate of 0.04, analyzed with the two-channel convolutional neural network and bidirectional LSTM model algorithm.

Figure 5 shows the test accuracy under different embedding dimensions, and Figure 6 shows the test accuracy under different sliding window sizes. Observation and analysis show that the test accuracy is highest, at 0.817, when the embedding dimension is 30 and the sliding window size is 15.

Figure 5.

Test accuracy under different embedding dimensions

Figure 6.

Test accuracy under different sliding window sizes

Conclusion

In this paper, we explore the sentiment visualization of classic Chinese literary works by applying sentiment analysis to the works and constructing sentiment views based on the BERT text analysis method. With a fused CNN and bidirectional LSTM model, and with Chinese classic literary works as the research object for sentiment analysis, the conclusions are as follows:

The accuracy of sentence-level sentiment judgment in Dream of Red Mansions, White Deer Plain and Thunderstorm is above 85%, indicating that the model predicts well.

The experimental results show that the test accuracy is highest, at 0.817, when the embedding dimension is 30 and the sliding window size is 15, demonstrating that the model achieves high accuracy on the Chinese dataset. This makes it convenient for readers to grasp the overall direction of a novel's storyline and to understand the emotional changes of its characters more clearly.
