Journal Details
License
Format
Journal
eISSN
2444-8656
First Published
01 Jan 2016
Publication timeframe
2 times per year
Languages
English
Open Access

Research on the effect of generative adversarial network based on wavelet transform hidden Markov model on face creation and classification

Published online: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Pages: -
Received: 06 Feb 2022
Accepted: 16 Apr 2022
Structural analysis of HMM

An HMM is a parametric probability model used to describe the statistical characteristics of a random process. It is a doubly stochastic process with two components: a Markov chain, which governs the state transitions and is characterized by the transition probabilities, and a general stochastic process, which relates the observation sequence to the states and is characterized by the observation probability distributions. Because the state transition process itself cannot be observed, the model is called a hidden Markov model; its specific structure is shown in Figure 1 below [1]:

Figure 1

Structure diagram of hidden Markov model
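
To make the doubly stochastic structure concrete, the sketch below sets up a toy discrete HMM λ = (π, A, B) in Python; all values are illustrative, not taken from this paper.

```python
import numpy as np

# A toy 3-state discrete HMM, lambda = (pi, A, B); values are illustrative.
pi = np.array([0.6, 0.3, 0.1])        # initial state distribution
A = np.array([[0.7, 0.2, 0.1],        # transition probabilities a_ij
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
B = np.array([[0.5, 0.4, 0.1],        # observation probabilities b_j(k)
              [0.1, 0.3, 0.6],
              [0.3, 0.3, 0.4]])

# The hidden chain evolves by A; each hidden state emits a symbol by B,
# so only the emissions, never the states, are observed.
assert np.allclose(A.sum(axis=1), 1) and np.allclose(B.sum(axis=1), 1)
```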

Definition 1

When HMM is chosen for face recognition and authentication, the problems it involves must be made clear; they are embodied in the following points. The first is the evaluation problem: given fixed model parameters \lambda, compute the probability P(O \mid \lambda) of the output sequence O = O_1, O_2, \ldots, O_T. The forward-backward algorithm is generally used.

For a state sequence Q = q_1, q_2, \ldots, q_T, the forward variable is defined as: \alpha_t(i) = P(O_1 O_2 \cdots O_t, q_t = S_i \mid \lambda)

Theorem 1

In other words, \alpha_t(i) is the probability, under the given model, of observing the partial sequence O_1, O_2, \ldots, O_t and being in state S_i at time t. The following derivation is used for calculation and analysis:

Initialization [2]: \alpha_1(i) = \pi_i b_i(O_1), \quad 1 \le i \le N

Induction: \alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i) a_{ij} \right] b_j(O_{t+1}), \quad 1 \le t \le T-1, \; 1 \le j \le N

Termination: P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)
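
The three steps above translate directly into code. The following is a minimal NumPy sketch of the forward algorithm, assuming the discrete toy model defined earlier; variable names are illustrative.

```python
import numpy as np

def forward(pi, A, B, O):
    """Forward algorithm: returns alpha and P(O | lambda) for an
    observation index sequence O under the discrete HMM (pi, A, B)."""
    T, N = len(O), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]                      # initialization
    for t in range(1, T):                           # induction step
        alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]
    return alpha, alpha[-1].sum()                   # termination: sum_i alpha_T(i)
```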

Proposition 2

The second is the decoding problem, which the Viterbi algorithm solves by dynamic programming. Define \delta_t(i) = \max_{q_1 q_2 \ldots q_{t-1}} P[q_1 q_2 \ldots q_{t-1}, q_t = i, O_1 O_2 \cdots O_t \mid \lambda]; the goal is to find the state sequence that maximizes \delta_T(i) at time T. The specific operations are as follows [3]:

Initialization: \delta_1(i) = \pi_i b_i(O_1), \quad \psi_1(i) = 0, \quad 1 \le i \le N

Recursion: \delta_t(j) = \max_{1 \le i \le N} \left[ \delta_{t-1}(i) a_{ij} \right] b_j(O_t), \quad \psi_t(j) = \arg\max_{1 \le i \le N} \left[ \delta_{t-1}(i) a_{ij} \right], \quad 2 \le t \le T, \; 1 \le j \le N

Termination: P^* = \max_{1 \le i \le N} \left[ \delta_T(i) \right], \quad q_T^* = \arg\max_{1 \le i \le N} \left[ \delta_T(i) \right]

State sequence backtracking: q_t^* = \psi_{t+1}(q_{t+1}^*), \quad t = T-1, T-2, \ldots, 1
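
As a hedged sketch, the Viterbi recursion and backtracking above can be written as follows; it reuses the toy model and is not the paper's implementation.

```python
import numpy as np

def viterbi(pi, A, B, O):
    """Most likely state path and its probability P* for sequence O."""
    T, N = len(O), len(pi)
    delta = np.zeros((T, N))
    psi = np.zeros((T, N), dtype=int)
    delta[0] = pi * B[:, O[0]]                   # initialization
    for t in range(1, T):                        # recursion
        trans = delta[t - 1][:, None] * A        # delta_{t-1}(i) * a_ij
        psi[t] = trans.argmax(axis=0)            # best predecessor for each state j
        delta[t] = trans.max(axis=0) * B[:, O[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()                # q*_T
    for t in range(T - 2, -1, -1):               # backtracking
        path[t] = psi[t + 1][path[t + 1]]
    return path, delta[-1].max()                 # P*
```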

Lemma 3

The third is the learning problem. The Baum-Welch algorithm is currently the most common method for HMM parameter estimation and optimization. It is an iterative algorithm: the parameters are first initialized from the user's accumulated experience, then refined so that each parameter gradually converges over successive iterations, finally yielding the optimized values. The specific steps are as follows [4]:

Initialization: \hat\pi_i = \gamma_1(i), the expected frequency of being in state S_i at t = 1; this yields the initial model \lambda_0 = (\pi, A_0, B_0).

Iterative calculation:

Corollary 4

Let \xi_t(i, j) denote the probability that the state at time t is S_i and the state at time t+1 is S_j; then: \xi_t(i,j) = \frac{P(q_t = i, q_{t+1} = j, O \mid \lambda)}{P(O \mid \lambda)} = \frac{\alpha_t(i)\, a_{ij}\, b_j(O_{t+1})\, \beta_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i)\, a_{ij}\, b_j(O_{t+1})\, \beta_{t+1}(j)}

Conjecture 5

With \gamma_t(i) = \sum_{j=1}^{N} \xi_t(i,j) the probability of being in state S_i at time t, \sum_{t=1}^{T-1} \gamma_t(i) equals the expected number of transitions out of state S_i over the whole process, and \sum_{t=1}^{T-1} \xi_t(i,j) equals the expected number of transitions from state S_i to S_j. The corresponding reestimation formulas are: \tilde a_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \quad \tilde b_j(k) = \frac{\sum_{t=1,\, o_t = v_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)}

Termination: iterate until \left| \log P(O \mid \lambda) - \log P(O \mid \lambda_0) \right| < \varepsilon, where \varepsilon is a preset convergence threshold.
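
A compact sketch of one Baum-Welch reestimation step, implementing the \xi, \gamma, and revaluation formulas above for the discrete toy model; in practice the step is repeated until the log-likelihood change drops below \varepsilon.

```python
import numpy as np

def baum_welch_step(pi, A, B, O):
    """One reestimation step: returns updated (pi, A, B) for sequence O."""
    O = np.asarray(O)
    T, N = len(O), len(pi)
    alpha = np.zeros((T, N)); beta = np.ones((T, N))
    alpha[0] = pi * B[:, O[0]]                       # forward pass
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]
    for t in range(T - 2, -1, -1):                   # backward pass
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
    P_O = alpha[-1].sum()                            # P(O | lambda)
    xi = np.zeros((T - 1, N, N))                     # xi_t(i, j) as defined above
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, O[t + 1]] * beta[t + 1])[None, :]
        xi[t] /= xi[t].sum()
    gamma = alpha * beta / P_O                       # gamma_t(i)
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.array([gamma[O == k].sum(axis=0) for k in range(B.shape[1])]).T
    return new_pi, new_A, new_B / gamma.sum(axis=0)[:, None]
```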

Optimization analysis of the observation vector
Preprocessing

Many current face recognition algorithms achieve a high recognition probability only because the captured images are of high quality. As facial expression, posture, and lighting change, the system's application performance steadily drops, and lens distortion and image noise likewise degrade face recognition. The preprocessing of the acquired image information must therefore be completed before face verification. In this paper, a fixed-window face detection method based on the AdaBoost algorithm is proposed [5].

Example 6

Boosting, one of the most effective approaches to current pattern classification problems, builds a strong classifier by combining a large number of weak classifiers, with the classification result obtained by a weighted vote of the weak classifiers. For a binary classification problem, the output variable is encoded as y \in \{-1, +1\}, and a classifier h predicts a label h(x) for each input x. The error on the training samples is then: err = \frac{1}{N} \sum_{i=1}^{N} 1\left( y_i \ne h(x_i) \right)
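
For illustration, one boosting round with the weighted error above can be sketched as follows; the weak learner is assumed to expose a fit/predict interface (e.g. a depth-1 decision stump), and all names are illustrative.

```python
import numpy as np

def adaboost_round(X, y, w, stump):
    """One AdaBoost round: weighted error, vote weight, weight update.

    y must be encoded in {-1, +1}; `stump` is any weak classifier
    with fit(X, y, sample_weight) and predict(X) (illustrative)."""
    stump.fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = w[pred != y].sum() / w.sum()                 # weighted training error
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))  # vote weight of this stump
    w = w * np.exp(-alpha * y * pred)                  # up-weight misclassified samples
    return w / w.sum(), alpha
```

The strong classifier is then sign(\sum_m \alpha_m h_m(x)), the weighted vote described above.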

From the perspective of practical application, the training process of the AdaBoost algorithm is shown in Figure 2 below:

In this study a cascade classifier is selected; it both controls the detection time and, by organizing simple, efficient classifiers, accurately detects the contained face samples while keeping the false rejection rate essentially zero. The specific structure is shown in Figure 3 below. Because the early classifiers are very simple and fast, only faces and face-like regions survive the first stages. The later stages use more complex classifiers, which take more time than the earlier ones but distinguish faces more accurately; since most regions have already been excluded in the initial stages, these sophisticated classifiers can afford exact computation. The overall decision process is a degenerate decision tree, which can also be called a cascade [6].

Figure 2

Training flow chart of the AdaBoost algorithm

Figure 3

Structure diagram of cascade classifier
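
This cascade structure is what OpenCV's Viola-Jones detector implements, so a hedged sketch of the fixed-window detection step can use its pretrained frontal-face model; the input file name is an assumption.

```python
import cv2

# Load OpenCV's pretrained Haar cascade (a chain of boosted classifiers).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(cv2.imread("visitor.jpg"), cv2.COLOR_BGR2GRAY)
# Every returned window survived all cascade stages; non-face regions
# were rejected cheaply by the early, simple classifiers.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    print(f"face at ({x}, {y}), size {w}x{h}")
```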

Feature Extraction
Note 7

Building on the traditional methods for acquiring HMM observation vectors, several feature extraction methods are selected to optimize and improve the observation vector of the HMM. Among them, wavelet analysis, a central topic of present-day engineering and applied mathematics research, has developed a fairly complete formal mathematical system with a solid theoretical foundation. Applied appropriately in image processing, it deals effectively with image enhancement, decomposition, fusion, compression, and related problems. The wavelet function takes the form [7]: \psi_{a,b}(t) = \frac{1}{\sqrt{a}} \psi\left( \frac{t-b}{a} \right)

Open Problem 8

In the formula above, a is the scale parameter and b is the translation parameter. When a > 1, the function \psi_{a,b}(t) is stretched; when 0 < a < 1, it is compressed. Thus \psi_{a,b}(t) changes continuously with the scale parameter a and the translation parameter b.
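
As a sketch of how wavelet analysis can supply HMM observation vectors, the following uses PyWavelets to take one 2-D decomposition of a normalized face image; the 48×48 size matches the experiments below, while the Haar wavelet is an illustrative choice.

```python
import numpy as np
import pywt  # PyWavelets

image = np.random.rand(48, 48)             # stand-in for a normalized face image
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
# cA keeps the coarse facial structure; cH, cV, cD capture horizontal,
# vertical, and diagonal detail. Concatenated, they form one observation vector.
observation = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])
print(observation.shape)                   # (2304,) for a 48x48 input
```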

E-HMM face authentication with the optimized observation vector as the core
Methods

E-HMM denotes a pseudo-two-dimensional HMM. Every E-HMM contains super states, and every super state contains a one-dimensional HMM; in other words, one group of one-dimensional HMMs is embedded inside another HMM, hence the name embedded HMM. The E-HMM is used to model the system user's face, which is divided into the five super states shown in Figure 4 below, each containing several sub-states. Note that the super states partition the image vertically, while the sub-states are analyzed along the horizontal direction of the model. At the same time, the feature extraction methods proposed above are used to improve the acquired observation vectors, and the overall authentication is completed with the trained E-HMM. The specific operation flow chart is shown in Figure 5 below [8]:

Figure 4

Face state division

Figure 5

Operation flow chart

Training

E-HMM training follows the maximum likelihood criterion. Every user has an E-HMM face model, which can be trained across lighting conditions, posture, hairstyle, and similar variations. The specific steps are as follows. First, the observation vectors of every user are obtained. Second, the model parameters are initialized, including the probability distributions and the number of states. Third, the initialized parameters are adjusted; in other words, the assignment of feature vectors to the super states and sub-states is optimized with a doubly nested Viterbi algorithm, and when the improvement in the Viterbi score over successive iterations falls below the preset range, the iteration ends and initialization is complete. Finally, the forward-backward algorithm is used to re-estimate the model parameters so that the probability of the observation vectors is maximized under the given model, ending the training. The specific operation is shown in Figure 6 below [7,8]:

Figure 6

E-HMM training operation flow chart
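
hmmlearn offers only one-dimensional HMMs, not the embedded E-HMM, so the following per-user training sketch approximates the five facial super states with a five-state chain; it is a simplification under that assumption, with illustrative names.

```python
import numpy as np
from hmmlearn import hmm

def train_face_model(observation_sequences):
    """Train one HMM per enrolled user from wavelet observation vectors."""
    X = np.vstack(observation_sequences)             # stacked feature rows
    lengths = [len(s) for s in observation_sequences]
    model = hmm.GaussianHMM(n_components=5,          # five states ~ five super states
                            covariance_type="diag", n_iter=25, tol=1e-2)
    model.fit(X, lengths)                            # Baum-Welch reestimation inside
    return model
```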

Face Authentication

After training, the E-HMM model can perform face authentication. Following the flow chart in Figure 7 below, the specific operations are these: first, the feature vectors of the visitor's face image are obtained; second, the Viterbi algorithm computes the probability of the observation vector under the trained model; third, whether the visitor passes authentication is judged accurately against the preset conditions. When there are many users, the authentication threshold should be optimized so that each user has an individually tuned threshold as the authentication condition, which improves the actual face recognition authentication rate.

Figure 7

Flow chart of face recognition
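
A minimal decision sketch for the acceptance step, assuming the hmmlearn-style model from the training sketch; hmmlearn's score returns the forward log-likelihood, standing in here for the Viterbi score described above, and the per-user threshold reflects the dynamic adjustment.

```python
def authenticate(model, observations, threshold):
    """Accept the visitor if the claimed user's model scores the
    observation sequence above that user's tuned threshold."""
    log_likelihood = model.score(observations)  # forward log-likelihood
    return log_likelihood > threshold
```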

Experimental analysis
Introduction

An empirical analysis is made of the face recognition authentication technology of the generative adversarial network based on the wavelet transform hidden Markov model. The experimental platform uses an Intel Core 2 Duo T8300 2.40 GHz CPU with 2 GB of DDR2 memory, running Windows XP; the programming environment is Microsoft Visual Studio 6.0. The selected face database is the series of face images taken by the Cambridge laboratory in the early 1990s, containing 40 subjects of different genders, ages, and races. Each subject has 10 images of size 92×112, normalized to 48×48, for a total of 400 images. Facial expression and posture vary considerably across the images, and face size varies by up to 10 percent. This ORL database is currently the most widely used standard database. In this experiment it is divided into two sets with the same number of samples: the first five images of each user form the training set, and the remaining five form the test set. The specific test scheme is shown in Figure 8 below [9,10]:

Figure 8

Test scheme
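
A small sketch of the ORL-style split just described: for each subject, the first five images train and the last five test; the data structure is illustrative.

```python
def split_orl(images_by_subject):
    """images_by_subject maps subject id -> list of 10 images."""
    train = {s: imgs[:5] for s, imgs in images_by_subject.items()}
    test = {s: imgs[5:] for s, imgs in images_by_subject.items()}
    return train, test
```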

Analysis

In the experimental analysis, performance is judged by two indicators: the false rejection rate (FRR) and the false acceptance rate (FAR). Following the experimental scheme proposed above, FRR-FAR curves are obtained for the various observation vector optimization methods. The specific authentication results are shown in Table 1 below:

Table 1. Distribution results of FRR-FAR for the authentication rate

Threshold    K-L transform        DCT                  Wavelet transform    Gabor feature
             FRR       FAR        FRR       FAR        FRR       FAR        FRR       FAR
−43.4        56.52%    0.75%      39.75%    0.75%      37.25%    0.75%      59.25%    0.25%
−43.6        42.75%    1.00%      32.00%    1.50%      31.75%    1.00%      49.75%    0.75%
−43.8        33.25%    1.25%      23.75%    3.25%      25.50%    2.25%      42.25%    1.25%
−44.0        25.50%    3.25%      16.00%    4.00%      20.50%    4.25%      33.50%    3.25%
−44.2        20.75%    5.25%      11.25%    7.50%      14.75%    7.25%      24.75%    5.50%
−44.4        15.50%    8.75%      8.50%     12.50%     10.50%    9.75%      16.00%    6.25%
−44.6        11.75%    12.50%     7.00%     16.75%     8.75%     13.50%     9.75%     8.25%
−44.8        10.00%    17.25%     5.25%     21.00%     6.75%     18.25%     7.50%     11.75%
−45.0        7.75%     23.25%     4.50%     25.00%     5.25%     23.25%     4.75%     15.00%
−45.2        4.25%     29.75%     2.75%     29.25%     3.25%     26.75%     3.00%     19.25%
−45.4        2.75%     36.00%     1.75%     33.75%     1.75%     31.00%     2.00%     25.50%
−45.6        1.25%     42.50%     1.25%     37.75%     1.25%     35.50%     1.00%     32.75%
−45.8        1.00%     49.25%     0.50%     42.50%     0.75%     40.25%     1.00%     38.75%
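
For reference, FRR and FAR at a given threshold can be computed from genuine and impostor score sets as sketched below; sweeping the threshold traces out the FRR-FAR pairs of Table 1. The score arrays are illustrative log-likelihoods from the models above.

```python
import numpy as np

def frr_far(genuine, impostor, threshold):
    """FRR: genuine attempts rejected; FAR: impostor attempts accepted."""
    frr = np.mean(np.asarray(genuine) <= threshold)
    far = np.mean(np.asarray(impostor) > threshold)
    return frr, far
```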

From the results above, the authentication error rate of the four observation vector optimization methods averages around 10%, with the wavelet transform method proposed in this paper performing best. At the same time, to obtain more reasonable results, the training and test sets were scientifically adjusted on the basis of the above experiments, for example by removing excessively tilted samples from the training set; the experiment was then repeated with the adjusted sample data, and the results obtained are shown in Table 2 below:

Table 2. Comparison of the authentication error rate before and after adjustment

Method               Original error rate    Error rate after training-set adjustment
K-L transform        12.10%                 7.20%
DCT                  9.80%                  6.05%
Wavelet transform    10.05%                 6.20%
Gabor feature        8.90%                  4.95%

The authentication performance of the model improves and meets the requirements of practical application. It follows that analyzing the error samples case by case, summarizing the main causes of authentication errors, adjusting certain users' collected samples accordingly, and removing defective sample data in time not only improves the efficiency of actual authentication but also ensures the validity of sample authentication. Because users differ in lighting conditions, sample characteristics, and other factors, their similarity distributions vary greatly: both the mean and the variance of the similarity differ between users. Dynamically adjusting each user's threshold and constructing the authentication experiment scientifically therefore yields effective information and improves the performance of the face authentication model built around the hidden Markov model.

System design and implementation

At present, face authentication technology is not mature, and the actual system design directly affects how efficiently the algorithm is applied. Therefore, once the algorithm and model composition are fixed, the system design must be complete and effective; only then can the core algorithm show its full application value. As the working principle in Figure 9 below shows, the system should first build the image information database and handle the storage and training of standard face information. It should also be connected to a camera, so as to acquire the face information of anyone entering a given place and, on the basis of accurate authentication, raise an alarm against illegal visitors.

Figure 9

Working principle of the system

Conclusion

To sum up, amid the innovation and development of the social economy and of scientific and technological ideas, most fields, whether in information security or public security, have raised their requirements for identity authentication technology and have begun to build new research models that combine previous development experience with advanced technical concepts. Taking the hidden Markov model as the core and optimizing the observation vectors, this paper models two-dimensional face images, trains the model, and uses the resulting high-quality hidden Markov model for face verification, with dynamic threshold adjustment strengthening the adaptivity of the authentication model. This not only improves the model's practical authentication performance but also demonstrates the distinctive value of the applied algorithm. In the continued exploration of identity authentication systems and related technologies, attention should therefore be paid to the hidden Markov model, practical exploration should be strengthened, and the direction of future research should be made clear, so as to solve the problems of traditional models and put forward a more effective authentication system.


References

[1] Shilong Wu. Research on Wavelet Transform and Embedded Hidden Markov Model in Face Recognition [J]. Journal of Huaibei Normal University: Natural Science Edition, 2013, 34(2): 29–32.

[2] Yang Liu, Fanliang Bu. Face recognition based on wavelet transform feature extraction and neural network classification [J]. Journal of Chinese People's Public Security University: Natural Science Edition, 2010, 16(1): 71–73.

[3] Zhiwei Zhang, Fan Yang, Kewen Xia, et al. Research on face recognition methods based on wavelet transform and NMF [J]. Computer Engineering, 2007, 33(6): 176–178.

[4] Yong Cheng, Hongjun Rong. Face recognition method and implementation based on wavelet transform and PCA analysis [J]. Journal of Nanjing Institute of Technology: Natural Science Edition, 2005, 3(3): 12–16.

[5] Li, Tao and Yang, Wenyin. "Solution to Chance Constrained Programming Problem in Swap Trailer Transport Organisation based on Improved Simulated Annealing Algorithm." Applied Mathematics and Nonlinear Sciences, vol. 5, no. 1, 2020, pp. 47–54. https://doi.org/10.2478/amns.2020.1.00005

[6] Wang, Yuanqing, Zhang, Guichen, Shi, Zhubing, Wang, Qi, Su, Juan and Qiao, Hongyu. "Finite-time active disturbance rejection control for marine diesel engine." Applied Mathematics and Nonlinear Sciences, vol. 5, no. 1, 2020, pp. 35–46. https://doi.org/10.2478/amns.2020.1.00004

[7] Zhen Yang, Fei N Xiang-, G Jun, et al. Face detection based on wavelet transform [J]. Journal of Beijing University of Posts and Telecommunications: Natural Science Edition, 2006, 29(3): 114–117.

[8] Liu Z, Wang H J, Long B, et al. Research on condition trend prediction based on weighed hidden Markov and autoregressive model [J]. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2009, 37(10): 2113–2118.

[9] Oh S Y, Song J W, Chang W, et al. Estimation and Forecasting of Sovereign Credit Rating Migration Based on Regime Switching Markov Chain [J]. IEEE Access, 2019, PP(99): 1–1. https://doi.org/10.1109/ACCESS.2019.2934516

[10] Feng Zhao, Jie Zhang. Predicting Research on Default Risk Premium Based on Markov Switching Model [J]. Statistics and Information Forum, 2014(5): 54–60.
