
Fake online review recognition algorithm and optimisation research based on deep learning



Introduction

In recent years, the development of e-commerce platforms has shifted from initially changing how consumers shop to driving a transition of consumption from 'quantity' to 'quality.' Since 2013, China has been the world's largest online retail market for eight consecutive years. By December 2020, the number of online shoppers in China had reached 782 million, accounting for 79.1% of all Internet users [1]. Under the development pattern in which the domestic cycle is the mainstay and the domestic and international cycles reinforce each other, online consumption has played a driving role in cultivating new market momentum and boosting the dual upgrade of consumption in both 'quality' and 'quantity.'

In the current global COVID-19 environment, consumers increasingly purchase products online instead of shopping in person. In the process of online shopping, consumer behaviour has shifted from simply satisfying shopping needs to finding the most desirable goods among numerous options. This is not only a change in online shopping behaviour, but also a result of the rapid development of e-commerce platforms. At the same time, consumers have become used to commenting on goods after shopping and sharing their shopping experience with the public [2]. As a result, the number of commodity reviews has grown rapidly along with the sharp rise in the volume of commodities.

Amid this sharp rise in the number of online reviews, genuine and fake reviews have begun to mix. From the perspective of interests, merchants hope their product reviews will highlight the advantages of their products, which gives rise to false reviews induced by profit. Some consumers, incentivised by merchants, fabricate their experience of using a product or praise it extravagantly [3]. This not only hinders consumers from making sound shopping decisions, but is also detrimental to the development of the entire e-commerce platform; hence it is urgent to find an algorithm that can identify fake e-commerce reviews.

At present, there are some feature-based screening methods, but their prediction performance remains unsatisfactory. Based on deep learning, this paper considers both the global and local features of the review text, which improves accuracy while preserving efficiency.

Literature review

The concept of the false review was first put forward by Jindal and Liu [4], and has since been redefined by many scholars. On the one hand, fake reviews are generated by fake shopping behaviour: the whole order is manipulated to serve the merchant's sales targets, so the fake purchase naturally produces a fake review. On the other hand, consumers with real product demand post exaggerated reviews after genuine purchases, driven by merchant incentives such as 'cash back for good comments.' The former requires identifying false reviews in combination with the reviewers' behaviour, while the latter only requires analysing the review text. In this paper, fake reviews are defined as reviews that provide no useful purchase reference to consumers, including both invalid reviews generated by fake buyers and false reviews generated by ordinary buyers in specific situations. The related research is therefore introduced below from these two aspects.

Research on identifying false review text

Banerjee and Chua [5] conducted an in-depth study of how fake reviews are written in order to analyse how they are generated. They asked volunteers to post fake reviews for a hotel and then interviewed them to study the language and mentality behind these fake reviews, so as to explore the differences between true and false reviews. A supervised learning algorithm was then used to identify fake reviews along four dimensions: writing style, articulation clarity, descriptive detail and cognitive indicators; a comparison with two other methods confirmed the advantages of this approach. Text mining technology can greatly improve the accuracy of extracting information from text. Using this technology, a review text analysis model has been built to mine words with important attributes, such as nouns, verbs and quantifiers, and to measure the proportions of these parts of speech within the whole review text [6]. The experimental results show that this model performs well. Lim et al. [7] point out that since a publisher of false reviews has an interest relationship with the seller, he will only serve the merchants who employ him and not those who pay no remuneration. As a result, such a reviewer comments repeatedly on certain merchants or products, which is called abnormal reviewer rating behaviour. Based on this assumption, a method for identifying false reviews from reviewers' abnormal evaluation behaviour is proposed; the algorithm combines an initial rating bias model with a full rating bias model to account for such behaviour. Zeng uses a bidirectional long short-term memory (LSTM) model to encode three local representations, applies self-attention and attention mechanisms to combine them into a global feature representation, and obtains the classification results with a Softmax classifier [8].

Research on identifying false reviewers

Mukherjee created an unsupervised classifier, the Author Spamicity Model (ASM), on the basis of the Bayes formula, combined it with nine extracted feature dimensions, including the features mentioned above, and tested it on fake review publishers [9]; the experimental results showed good performance. Li et al. [10] point out two regularities of fake reviewers: multiple fake reviewers comment on a commodity at the same time, and they gather rapidly within a short period to comment intensively and actively. Based on these two regularities, a two-mode hidden Markov model is proposed to identify false reviewers, exploiting the characteristic that their number increases rapidly within a certain period. Akoglu et al. [11] proposed a bipartite graph that reflects the relationship between reviewers and commodities, in which the edge weights represent the weights of reviews; the nodes are judged by an unsupervised recognition model based on a Markov random field, so as to identify false reviewers. Rayana and Akoglu [12] proposed a relational matrix model to describe the relationships among reviewers, reviews and commodities; based on a Markov random field model combined with the behavioural characteristics of fake review publishers, experiments proved that the model performs excellently. Lu proposed a graph model that can detect fake text and identify fake reviewers simultaneously [13]. It combines features describing both reviews and reviewers and detects the review text and the reviewer at the same time; the experiments in that work show that this algorithm outperforms the reference algorithms on every index.

To sum up, there are algorithms that can identify false reviews, but their recognition procedures are complex and their accuracy is unsatisfactory. This paper improves an algorithm based on deep learning to adapt to the sparsity and complexity of short text. After first eliminating fake shopping behaviour, the semantic information of the short text is mined, so as to identify both the fake reviews generated by fake shopping behaviour and the exaggerated reviews generated by real shopping behaviour.

Establish fake comment recognition model based on deep learning
The data set

This paper uses Python to crawl the online review data of a brand of mobile phone products on the Jingdong (JD.com) website. Since network comments contain a lot of duplicate information, the first step is to de-duplicate the data. After de-duplication, the distribution of positive and negative samples is uneven; unbalanced samples affect model performance, so up-sampling is carried out. Finally, the data set is divided into training, validation and test sets in a ratio of 8:1:1. The specific data are shown in Table 1.

Table 1. Data set information

Data set                 Total samples   Positive samples   Negative samples
Original data set        38,874          21,213             17,661
After de-duplication     36,674          20,038             16,636
After up-sampling        40,076          20,038             20,038
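As an illustration of how the figures in Table 1 arise, the following is a minimal sketch of the de-duplication, up-sampling and 8:1:1 split, assuming the raw reviews sit in a CSV with hypothetical `text` and `label` columns (the file and column names are assumptions, not the paper's):

```python
# Minimal sketch: de-duplicate, up-sample the minority class, split 8:1:1.
# "jd_reviews.csv" and the 'text'/'label' columns are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

df = pd.read_csv("jd_reviews.csv").drop_duplicates(subset="text")

pos = df[df["label"] == 1]                      # majority (positive) class
neg = df[df["label"] == 0]                      # minority (negative) class
neg_up = resample(neg, replace=True, n_samples=len(pos), random_state=42)
balanced = pd.concat([pos, neg_up]).sample(frac=1, random_state=42)

# 8:1:1 split into training, validation and test sets
train, rest = train_test_split(balanced, test_size=0.2, random_state=42)
val, test = train_test_split(rest, test_size=0.5, random_state=42)
```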
Data preprocessing

Due to the complex semantics of Chinese text, a series of data preprocessing should be carried out after the above data collection. The deep learning-based Chinese short text data preprocessing process proposed in this paper is shown in Figure 1.

Remove duplicates. The main task of this paper is to identify false reviews, but the raw data contain many repeated comments, which add unnecessary burden to the recognition model. Therefore, the collected data are first de-duplicated.

Data sampling. In the review data collected for a mobile phone brand on Jingdong, the number of positive samples is slightly higher than that of negative samples, and the uneven distribution would affect the accuracy and efficiency of the model. Therefore, the up-sampling method is used to replicate negative samples until the positive and negative classes are balanced.

Word segmentation. As the word is the unit of measurement in the subsequent processing, the original short text sentences need to be segmented. A common word segmentation technology is the Jieba tool, which is also the tool used in this paper.

Remove stop words. After word segmentation, there remain words that do not help semantic analysis and only play a cohesive role in sentences, such as 'of' and 'the.' Therefore, the Baidu stop word list is used to remove stop words after segmentation.
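A minimal sketch of these steps, assuming Jieba is installed and the Baidu stop word list has been saved locally (the file name is an assumption):

```python
# Sketch: de-duplicate, segment with Jieba, then drop stop words.
# "baidu_stopwords.txt" is a hypothetical local copy of the Baidu stop word list.
import jieba

def preprocess(reviews, stopword_path="baidu_stopwords.txt"):
    with open(stopword_path, encoding="utf-8") as f:
        stopwords = {line.strip() for line in f if line.strip()}
    seen, cleaned = set(), []
    for text in reviews:
        if text in seen:                      # duplicate review: skip
            continue
        seen.add(text)
        tokens = [w for w in jieba.lcut(text)
                  if w.strip() and w not in stopwords]
        cleaned.append(tokens)
    return cleaned
```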

Fig. 1

The flow chart of data preprocessing in Chinese

Fake comment recognition model based on deep learning
Basic introduction to the model
Convolutional Neural Network

Convolutional neural network is a deep multi-layer neural network model composed of regularly connected layers of neurons [14], specifically comprising an input layer, hidden layers (convolution and pooling layers) and an output layer. The convolutional neural network has the feature of sparse connection, that is, local connection: each node of a convolution layer is connected only to a local region of the previous layer. The convolutional neural network can therefore be applied to the false review recognition model: through the design of the convolution kernels, textual information can be extracted, and the local connectivity allows the semantics of adjacent words to be recognised, making the network well suited to short text classification and recognition.

Recurrent Neural Network

Recurrent neural network is also a deep neural network model, but unlike the convolutional neural network, it has self-connections within the hidden layers, which allow information about each element to be transmitted step by step; that is, the recurrent neural network has 'memory' [14]. This structure makes the processing of sequence information more accurate, so it is commonly used to process text and sound data.

In text processing, the recurrent neural network not only processes the current word but also transmits its information, weighted, to the next step. Through this repeated multiplication by the weights, the gradient explodes when |W| > 1 and vanishes when |W| < 1. To solve these problems, scholars proposed the LSTM and the gated recurrent unit (GRU) [16].

Hybrid neural network model

Real product reviews are statements made by consumers with actual demand for the product, posted after receiving the product through the normal shopping process, according to their own usage and shopping experience. Since product reviews are essentially collections of short texts generated by buyers, the model should identify fake reviews from both the buyer and the review text. Therefore, this paper combines fake reviewer identification based on feature engineering with review text identification based on deep learning to detect fake reviews across the whole shopping process. The specific model is shown in Figure 2.

Fig. 2

Hybrid neural network model

Text representation

The short text of an online review is made up of sentences, and each sentence consists of a number of words, that is, $X = [x_1, \ldots, x_n]$. The word vectors in the model need to be initialised; here, the initial values are obtained by sampling a normal random distribution.
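A sketch of this initialisation in TensorFlow, the framework used later in the experiments (the vocabulary size, embedding dimension and sentence length are assumed values):

```python
# Sketch: word embeddings initialised by sampling a normal distribution.
# vocab_size, embed_dim and max_len are assumed values, not the paper's.
import tensorflow as tf

vocab_size, embed_dim, max_len = 20000, 128, 60
embedding = tf.keras.layers.Embedding(
    input_dim=vocab_size,
    output_dim=embed_dim,
    embeddings_initializer=tf.keras.initializers.RandomNormal(stddev=0.1),
)
word_ids = tf.zeros((1, max_len), dtype=tf.int32)   # a batch of token indices
X = embedding(word_ids)                             # X = [x_1, ..., x_n]
```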

Fake reviewer identification based on feature engineering

Fake reviews include not only the comments generated by fake shopping behaviour, but also the fake comments generated in the evaluation stage of normal shopping behaviour. Therefore, in addition to using the hybrid neural network model to extract the global and local features of the text, the comments generated by fake reviewers through fake shopping behaviour should first be eliminated. The following indicators are used to quantify reviewers' shopping behaviour and thereby identify false reviewers, improving the overall recognition effect of the model; a quantification sketch is given after Table 2.

Correlation between search keywords and the commodity. Consumers search through the search bar, so the search keywords should match the description keywords of the product finally purchased. Reviewers posting fake reviews, however, may show no search behaviour at all, or their search and purchase behaviours may be inconsistent.

Time to complete the shopping activity. Normal shopping takes longer from keyword search to placing an order, while fake shopping takes relatively little time because no comparison or browsing is needed.

Number of similar items browsed in other shops. After a keyword search, normal consumers browse and compare related products, whereas reviewers with fake behaviour directly purchase the products designated by the merchants, so the number of similar products viewed in other stores is relatively small.

Number of other items browsed in the store. When ordinary consumers settle on a pre-purchased product while browsing, they will enter its store to browse other products there, while reviewers with fake behaviour do not browse during the purchase.

Collecting and following a store. Ordinary consumers will add the selected products to favourites and follow the store so as to make repeat purchases later, but fake reviewers do not engage in these behaviours because they do not buy the product repeatedly.

Table 2. User behaviour characteristics

Reviewer buying behaviour characteristic       Buying behaviour of fake shopping reviewers
Correlation between search term and product    Low correlation between keywords and final product
Time to complete the shopping activity         Shorter completion time
Similar items browsed in other shops           Fewer similar goods browsed
Other items browsed in the store               Fewer other items browsed
Collecting goods and following stores          No collecting or following behaviour
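As a hedged illustration, the indicators in Table 2 might be quantified from a reviewer's session log as follows; the paper does not specify a log schema, so every field name below is a hypothetical assumption:

```python
# Hypothetical sketch of quantifying the Table 2 behaviour indicators.
# All session fields are assumptions; real platform logs will differ.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def behaviour_features(session):
    return {
        # correlation between search keywords and the purchased product
        "query_product_match": jaccard(session["search_terms"],
                                       session["product_keywords"]),
        # time from keyword search to placing the order
        "shopping_duration": session["order_time"] - session["search_time"],
        # similar items browsed in other shops
        "similar_items_browsed": len(session["similar_item_views"]),
        # other items browsed within the same store
        "in_store_items_browsed": len(session["store_item_views"]),
        # collecting the product / following the store
        "favourited_or_followed": int(session["favourited"]
                                      or session["followed_store"]),
    }
```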
Local text feature representation

The hybrid neural network model uses the convolutional network to represent the local features of the text. This is realised through a convolution kernel $Y \in \mathbb{R}^{t \times d}$, where $d$ is the word-vector dimension and $t$ is the window width. Specifically, local features are extracted by the following formula:
$$N_{ij} = f\left( Y \odot X_{j:j+t-1} + b \right)$$
where $f$ is the nonlinear activation function and $b$ is the bias. In this paper, the widely used Relu function is selected as the activation function. The convolution features are then concatenated as follows:
$$N_i = \left[ N_{i1}, N_{i2}, \ldots, N_{i(n-t+1)} \right]$$

As shown in the above formula, each convolution kernel extracts $n - t + 1$ local features. In this paper, $k$ convolution kernels are adopted; the feature dimension is reduced by max pooling, extracting the optimal feature $N_{i\max}$ of each kernel and concatenating the optima:
$$X_{local} = \left[ N_{1\max}, N_{2\max}, \ldots, N_{k\max} \right]$$
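A sketch of this local-feature branch in TensorFlow, using stand-in tensor shapes (k = 100 kernels and window width t = 3 are illustrative choices):

```python
# Sketch: k convolution kernels of window width t over the embedded sentence,
# followed by max pooling; shapes are stand-ins for illustration.
import tensorflow as tf

k, t = 100, 3
X = tf.random.normal((1, 60, 128))       # embedded sentence, n = 60, d = 128

conv = tf.keras.layers.Conv1D(filters=k, kernel_size=t, activation="relu")
N = conv(X)                              # (1, n - t + 1, k): features N_ij
x_local = tf.keras.layers.GlobalMaxPooling1D()(N)   # (1, k): X_local
```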

Global text feature representation

The global features of the model are extracted by the recurrent neural network. Because of the gradient explosion and gradient vanishing problems of the plain recurrent neural network mentioned above, LSTM is adopted in this paper to optimise the model, avoiding these problems by introducing a cell state and gated units. The bidirectional LSTM, namely BI-LSTM, reads the text in both the forward and backward directions, giving the model a better effect. The output state is:
$$h_i = F\left( x_i, h_{i-1} \right)$$
where $h_i$ is the output state, $h_{i-1}$ is the output of the previous step and $x_i$ is the current word vector. The function $F$ is the LSTM operation described in the work of Chung et al. [14] and is not repeated here. BI-LSTM concatenates the output sequences of the two directions into a single vector:
$$X_{hidden} = \left[ hl_{1:n}, hr_{1:n} \right]$$

The next step is to extract the global features of the sentence from the hidden layer. Since the forward and backward output sequences obtained in the previous step contain the global information, different extraction methods can be chosen; their relative merits are judged later by comparing recognition accuracy. Specifically, the global features of the hidden layer are extracted by the following three methods:
$$X_{global-1} = \left[ hl_{end}, hr_{end} \right]$$
$$X_{global-2} = \left[ hl_{average}, hr_{average} \right]$$
$$X_{global-3} = \left[ hl_{attention}, hr_{attention} \right]$$

The first approach takes the final states of $hl$ and $hr$ as the global feature, later referred to as hybrid-e. The second takes the average states of $hl$ and $hr$, later denoted hybrid-av. The third uses the attention mechanism to weight each hidden state and takes the weighted combination as the global feature, later referred to as hybrid-at [17].
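A sketch of the global-feature branch with the three pooling choices (the hidden size of 64 matches the experimental setting later; the attention scoring layer is a simple illustrative form):

```python
# Sketch: BI-LSTM over the sentence, then the three global-feature choices.
import tensorflow as tf

X = tf.random.normal((1, 60, 128))                       # embedded sentence
H = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True))(X)  # (1, 60, 128): X_hidden

# hybrid-e: final state of each direction (backward direction ends at index 0)
x_global_e = tf.concat([H[:, -1, :64], H[:, 0, 64:]], axis=-1)

# hybrid-av: average over all hidden states
x_global_av = tf.reduce_mean(H, axis=1)

# hybrid-at: attention-weighted sum of hidden states
scores = tf.keras.layers.Dense(1)(H)                     # (1, 60, 1)
alpha = tf.nn.softmax(scores, axis=1)                    # attention weights
x_global_at = tf.reduce_sum(alpha * H, axis=1)
```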

Text classification

The global feature obtained in the previous step is concatenated with the local feature:
$$X = \left[ X_{local}, X_{global} \right]$$

Next, fusion is implemented in the fully connected layer, whose output is a vector representation containing the global and local features:
$$y = W_n \times \left( X \odot W_q \right) + b_n$$

Finally, the Softmax classifier is used to classify the above vector, achieving the ultimate goal of text recognition:
$$P_n = \frac{\exp\left( y_n \right)}{\sum_{n' \in N} \exp\left( y_{n'} \right)}$$
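A sketch of this classification head, with the Dropout layer of the next section already in place between the fully connected fusion and the Softmax output (feature sizes are stand-ins):

```python
# Sketch: concatenate local and global features, fuse, then Softmax classify.
import tensorflow as tf

x_local = tf.random.normal((1, 100))    # stand-in for the CNN branch output
x_global = tf.random.normal((1, 128))   # stand-in for the BI-LSTM branch output

x = tf.concat([x_local, x_global], axis=-1)                 # X = [X_local, X_global]
fused = tf.keras.layers.Dense(128, activation="relu")(x)    # full-connection fusion
fused = tf.keras.layers.Dropout(0.5)(fused, training=True)  # see Dropout section
probs = tf.keras.layers.Dense(2, activation="softmax")(fused)  # P_n: fake/true
```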

Model optimisation
Relu activation function

Activation functions introduce a non-linear element into a model that is otherwise linear at every layer. Since neurons are activated only when a certain threshold is reached, the Relu function is consistent with the characteristics of biological neurons [16]; its expression is given in formula (12) and its graph in Figure 3:
$$Relu\left( x \right) = \max \left( 0, x \right)$$

Fig. 3

The Relu activation function

Dropout layer

The Dropout layer is set between the fully connected layer and the Softmax layer to avoid over-fitting. The specific operation is to discard neurons probabilistically during training, so as to reduce the number of training parameters and prevent overfitting. The propagation formulas of the neurons are:
$$r_i^u \sim \mathrm{Bernoulli}\left( p \right)$$
$$y^v = r^u * y^u$$
$$Z_i^{u+1} = W_i^{u+1} y^v + b_i^{u+1}$$
$$y_i^{u+1} = f\left( Z_i^{u+1} \right)$$

The gradient descent method

The mini-batch gradient descent method reduces model training time and enhances robustness by repeatedly selecting a small batch of data during training. The specific update formula is:
$$\theta_i' = \theta_i - \alpha \sum\nolimits_{k=1}^n \left( h_\theta\left( x^k \right) - y^k \right) x_i^k$$
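A minimal NumPy sketch of one such update for a linear hypothesis $h_\theta(x) = \theta \cdot x$, matching the formula above:

```python
# Sketch: one mini-batch gradient-descent step for a linear hypothesis.
import numpy as np

def minibatch_step(theta, X_batch, y_batch, alpha=0.01):
    # sum over the batch of (h_theta(x^k) - y^k) * x^k
    grad = (X_batch @ theta - y_batch) @ X_batch
    return theta - alpha * grad

theta = np.zeros(3)
X_batch = np.random.randn(32, 3)          # a mini-batch of 32 samples
y_batch = np.random.randn(32)
theta = minibatch_step(theta, X_batch, y_batch)
```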

Experimental results
Evaluation indicators

In the tests of all the models in this paper, as shown in Figure 4, the number of false comments identified as false is A, the number of false comments identified as true is B, the number of real comments identified as false is C, and the number of real comments identified as true is D.

Fig. 4

Indicator diagram

Specific evaluation indicators are as follows:

precision: the proportion of all samples classified correctly;
$$precision = \frac{A + D}{A + B + C + D}$$

precision_fake: the percentage of samples predicted to be false that actually are false reviews;
$$precision\_fake = \frac{A}{A + C}$$

recall_fake: the percentage of false review samples that are predicted to be false;
$$recall\_fake = \frac{A}{A + B}$$

F_score_fake: combines the precision and recall of fake reviews to evaluate the identification effect on fake reviews;
$$F\_score\_fake = \frac{2 * precision\_fake * recall\_fake}{precision\_fake + recall\_fake}$$

precision_true: the percentage of samples predicted to be true comments that actually are true;
$$precision\_true = \frac{D}{B + D}$$

recall_true: the percentage of real review samples that are predicted to be true;
$$recall\_true = \frac{D}{C + D}$$

F_score_true: combines the precision and recall of real reviews to evaluate the identification effect on real reviews;
$$F\_score\_true = \frac{2 * precision\_true * recall\_true}{precision\_true + recall\_true}$$
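For reference, a small sketch computing all seven indicators from the four counts defined above:

```python
# Sketch: the evaluation indicators from counts A (fake->fake),
# B (fake->true), C (true->fake) and D (true->true).
def metrics(A, B, C, D):
    p_fake, r_fake = A / (A + C), A / (A + B)
    p_true, r_true = D / (B + D), D / (C + D)
    f1 = lambda p, r: 2 * p * r / (p + r)
    return {
        "precision": (A + D) / (A + B + C + D),
        "precision_fake": p_fake, "recall_fake": r_fake,
        "F_score_fake": f1(p_fake, r_fake),
        "precision_true": p_true, "recall_true": r_true,
        "F_score_true": f1(p_true, r_true),
    }
```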

Model comparison

Since many classification models could serve as the false review recognition model, the accuracy of each model and its suitability for short text classification should be measured by the above indexes. In this paper, the convolutional and recurrent neural networks are combined to construct the model, so it is compared with the CNN and RNN models, respectively. The CNN model uses a convolution layer, a pooling layer and a Softmax classifier to recognise the local features of the text and achieve classification. The RNN model is divided into the LSTM and BI-LSTM models; comparative tests are carried out on each to verify the optimisation effect of the bidirectional LSTM. In extracting the vector of hidden global features from the RNN model, there are three options: the final state, the average state and the attention mechanism. To compare the three extraction methods, the final state and average state methods were also applied in the BI-LSTM model to extract the global features, referred to as BI-LSTM-e and BI-LSTM-av, respectively. Similarly, the hybrid neural network model under the three extraction methods is denoted hybrid-e, hybrid-av and hybrid-at, respectively. In addition, the hybrid model with the feature engineering recognition step is compared with the model that only mixes the convolutional and recurrent neural networks, with both using the same global feature extraction method in the RNN part. This verifies that first eliminating the online comments of false reviewers through feature engineering and then using the hybrid neural network to identify and classify the global and local features of short texts yields a better recognition effect.

Results analysis

In this paper, the models are written in Python with TensorFlow, a framework widely used in the field of deep learning, so that the models can be compared horizontally. The hidden state dimension of the recurrent neural network is set to 64, at which its performance is optimal. At the same time, to make the performance comparison among models more significant, the number of convolution kernels in each layer is set to 100. During training, it is found that the overall accuracy of the model is negatively correlated with the model loss in the early stage, and both stabilise in the later stage. Therefore, model performance is evaluated every 200 training iterations, so that training can stop as soon as performance no longer improves; this improves efficiency, reduces the running time of the model and prevents overfitting. The recognition effect of the final hybrid model and the horizontal comparison of the models are shown in Table 3.

Table 3. Deep learning model results

Evaluation index  CNN    LSTM   BI-LSTM-e  BI-LSTM-av  hybrid-e  hybrid-av  hybrid-at  F-hybrid-av
precision_fake    0.882  0.878  0.880      0.798       0.892     0.893      0.887      0.894
recall_fake       0.901  0.894  0.902      0.832       0.912     0.912      0.909      0.911
F_score_fake      0.891  0.886  0.891      0.815       0.902     0.902      0.898      0.902
precision_true    0.897  0.892  0.899      0.829       0.915     0.916      0.914      0.917
recall_true       0.879  0.881  0.878      0.802       0.897     0.899      0.891      0.903
F_score_true      0.888  0.887  0.888      0.815       0.906     0.907      0.902      0.910
precision         0.893  0.890  0.892      0.824       0.902     0.905      0.898      0.906

LSTM, long short-term memory; F-hybrid-av, the hybrid-av model combined with feature engineering.

The above experimental results are comprehensively analysed as follows:

Among all the models compared horizontally, the best recognition effect is achieved by the method that first eliminates false reviewers' comments through feature engineering and then mixes the convolutional and recurrent neural networks to extract the global and local features of the text itself. The overall recognition accuracy of this model reaches 90.6%, better not only than the single neural networks but also than the hybrid neural network without feature engineering. This indicates that eliminating false reviewers' comments via feature engineering is beneficial to the recognition effect of the model.

Comparing the two separate neural network models, the recognition effect of the convolutional neural network is better than that of the recurrent neural network, because the convolutional network recognises text through local features while the recurrent network relies on global feature extraction. Online reviews are short texts: global features mainly capture the emotional tendency of a whole review and contribute little to identifying false reviews, whereas the local characteristics of product reviews are relatively clear, often evaluating specific product attributes. Extracting these attributes is therefore more helpful for distinguishing true from false reviews.

For the recurrent neural network, the three methods of extracting global features make little difference to the final recognition effect. In the hybrid models in particular, although all three achieve high overall accuracy, the differences among them are small. The reason is that product reviews are short overall, so whether the average value or the final state is chosen makes little difference to global feature extraction. Relatively speaking, the average method performs best, which is why the hybrid model combined with feature engineering also uses the average extraction method for global features in its recurrent part.

Comparing the LSTM and bidirectional LSTM models in terms of recognition effect, BI-LSTM-e performs best and is similar to the LSTM model, but the BI-LSTM model that averages the hidden states to extract global features performs poorly. This indicates that although the bidirectional LSTM addresses hidden dangers such as gradient explosion, the benefit of the bidirectional structure is weakened by the short length of product reviews, making its recognition advantage less significant.

Optimisation of fake review recognition model based on deep learning
Data acquisition

In the above experiments, the input word vectors of the classification model's input layer were chosen by random initialisation. However, when the data scale is not large enough, this method leads to overly long training time and insufficiently accurate semantics in the trained word vectors. Therefore, the initial word vectors should be pre-trained on a more accurate corpus.

This paper crawled the corpus related to mobile phone products from Baidu Baike using Python and took it as the training set for pre-trained word vectors, so as to optimise the model. By training on a large product-related corpus, the deep meanings of words and the relationships between word vectors are mined.

Optimisation algorithm

Word2vec is a word vector generation tool based on deep learning, with two language model variants: CBOW and skip-gram. The two have opposite structures: CBOW takes the context as input to predict the target word, while skip-gram takes the word as input to predict its context. In the calculation process, computing the probability of every word in the vocabulary appearing in the context of the target word makes the workload heavy, so the algorithm is optimised by the hierarchical Softmax and negative sampling methods.

Hierarchical Softmax algorithm

The difference between the hierarchical Softmax algorithm and the Softmax algorithm is that the former uses the structure of a Huffman tree and a product of conditional probabilities to make the probability easy to calculate. The Softmax layer is transformed layer by layer, and binary logistic regression is used to fit each conditional probability, judging whether the target word lies in a certain subset:
$$p\left( w_t \in D_i \mid context \right) = \frac{1}{1 + e^{-u_{d_{root}} \cdot v_{w_t}}}$$

The algorithm transforms the output of the neural network into a Huffman tree, in which the nodes of the tree correspond to nodes of the hidden layer and the word vector of the root node corresponds to the projected word vector. By splitting the data set D layer by layer until a single word remains, the amount of computation and the running time of the whole model are reduced, optimising the final recognition efficiency.

Negative sampling algorithm

The negative sampling method marks words unrelated to the centre word as negative samples, so that during training only the weights of a small, randomly selected set of negative samples are updated, without adjusting all parameters after each step. This optimisation greatly reduces the computation and saves training time. The specific formulation is as follows:
$$p\left( D = 1 \mid w, context \right) = \delta\left( u_w \cdot v_{context} \right)$$
$$\prod_{\left( w, context \right) \in D} P\left( D = 1 \mid w, context \right)$$
$$\prod_{\left( w, context \right) \notin D} P\left( D = 1 \mid w, context \right)$$
$$\prod_{\left( w, context \right) \in D} P\left( D = 1 \mid w, context \right) \times \prod_{\left( w, context \right) \notin D} \left( 1 - P\left( D = 1 \mid w, context \right) \right)$$

Equations (26) to (29) are, respectively, the logistic regression representation, the likelihood function, the negative-sample probability and the optimisation objective of the sample (w, context).
$$\log\left( \delta\left( u_w \cdot v_{context} \right) \right) + \sum_{w' \in N} \log\left( \delta\left( -u_{w'} \cdot v_{context} \right) \right)$$
where N is the negative sample set of the centre word w. The objective function obtained by optimising the Softmax function simplifies the calculation process and reduces the calculation time.
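As a hedged sketch, pre-training such word vectors can be done with gensim's word2vec implementation, which exposes the skip-gram/CBOW choice and the two optimisations directly (the corpus below is a placeholder for the segmented Baidu Baike crawl):

```python
# Sketch: pre-training word vectors with gensim word2vec; the tiny corpus
# here is a placeholder for the segmented Baidu Baike product corpus.
from gensim.models import Word2Vec

tokenised_corpus = [["这", "款", "手机", "屏幕", "清晰"]]  # placeholder

model = Word2Vec(
    sentences=tokenised_corpus,
    vector_size=128,        # embedding dimension (assumed value)
    sg=1,                   # 1 = skip-gram, 0 = CBOW
    hs=0, negative=5,       # negative sampling; set hs=1 for hierarchical Softmax
    window=5, min_count=1, workers=4,
)
vectors = model.wv          # pre-trained vectors, looked up by word
```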

Optimisation results and analysis

As can be seen from Table 4, after pre-trained word vectors replace the randomly initialised ones, every recognition model achieves better recognition accuracy and classification effect, indicating improved performance across the board. The best recognition effect is still the hybrid neural network model combined with feature engineering, with an accuracy of 91.5%, an increase of 0.9 percentage points. The results show that pre-trained word vectors affect model performance: the choice of initial word vectors greatly influences the running time and efficiency of the model, and pre-training can mine the semantic features of the text more comprehensively, thereby improving the accuracy of false review recognition.

Table 4. Deep learning model results after improved word-vector training

Evaluation index  CNN    LSTM   BI-LSTM-e  BI-LSTM-av  hybrid-e  hybrid-av  hybrid-at  F-hybrid-av
precision_fake    0.891  0.887  0.892      0.864       0.905     0.902      0.895      0.902
recall_fake       0.905  0.907  0.909      0.850       0.923     0.920      0.912      0.921
F_score_fake      0.898  0.897  0.900      0.857       0.914     0.911      0.903      0.898
precision_true    0.902  0.905  0.908      0.853       0.921     0.919      0.919      0.923
recall_true       0.892  0.892  0.894      0.862       0.902     0.903      0.899      0.905
F_score_true      0.897  0.898  0.901      0.857       0.911     0.911      0.909      0.897
precision         0.899  0.898  0.901      0.859       0.912     0.913      0.908      0.915

LSTM, long short-term memory; F-hybrid-av, the hybrid-av model combined with feature engineering.

Conclusion

This paper proposes a hybrid neural network model combined with feature engineering. It is compared with the convolutional neural network model, the LSTM model, the bidirectional LSTM model and the hybrid models that combine the convolutional and recurrent neural networks using three different global feature extraction methods, all judged by the evaluation indexes to determine recognition effectiveness. The randomly initialised word vectors used in the above models are then replaced with optimised pre-trained vectors. The experimental results show that combining convolutional and recurrent neural networks better captures the local and global features of text, and that adding feature engineering eliminates part of the false reviewers' comments and gives the model a better recognition effect. After optimising the randomly initialised word vectors, the overall performance of the models improves to a certain extent, and the recognition accuracy of the hybrid model proposed in this paper reaches 91.5%. The proposed method can be well applied in the e-commerce shopping environment to effectively identify fake reviews generated by fake reviewers and by ordinary buyers driven by interests, so as to curb fake online reviews to a certain extent and create a good shopping environment.
