Stock price forecasting is an attractive research topic. Many previous studies used a single method or a combination of methods to make predictions; however, accurately predicting stock prices remains very difficult. To improve prediction precision, this study proposes an approach that recurrently feeds the forecast error back into the neural network model, in three steps. First, the initial predicted value for the next day is obtained from a neural network trained on the historical data. Then, the prediction error for the next day is obtained from a second neural network trained on the historical prediction errors. Finally, the initial predicted value and the predicted error are added to obtain the final predicted value for the next day. We use recurrent neural network methods, such as the Long Short-Term Memory (LSTM) network and the Gated Recurrent Unit (GRU), which are popular in recent neural network research. In the simulations, Chinese stock index prices from June 2010 to August 2017 are used as training data, and those from September 2017 to April 2018 are used as test data. The experimental results demonstrate that the proposed method with forecast error gives a more accurate prediction of the next day's high price, indicating that its performance is superior to that of the traditional models without forecast error.

#### Keywords

- stock price prediction
- recurrent neural network
- long short-term memory network
- gated recurrent unit

Financial time series forecasting uses the historical data of financial products to establish a predictive model of price fluctuations, thus guiding investors to make rational investments. Accurate and stable forecasting models are crucial for investors who wish to hedge risks and develop profitable investment policies, so studying the prediction of financial time series is of great significance. However, the financial market is a complicated non-linear dynamic system affected by many factors, and it is very difficult to predict financial prices from the available information.

Traditional financial time series analysis methods, such as the auto-regressive integrated moving average (ARIMA) model [1], the auto-regressive conditional heteroskedasticity (ARCH) model and the generalised autoregressive conditional heteroskedasticity (GARCH) model [2, 3], are based on mathematical statistics and rely on assumptions such as stationarity and normally distributed errors. Analysis with such carefully constructed models requires strict parameter choices, superb modelling skills and rich practical experience. However, because many factors affect the financial market, financial time series data are very complex: they are noisy, non-linear and non-normal. As a result, the traditional analysis methods cannot adequately model time series in the financial field.

In recent years, advances in information technology have provided many new methods and ideas for financial analysis and forecasting. In financial time series analysis, data mining methods and data-driven models can overcome some shortcomings of the traditional approaches by analysing and processing large-scale data sets.

Meanwhile, deep learning algorithms have achieved tremendous progress in image recognition, speech recognition, autonomous driving and other fields. Among deep learning algorithms, the Recurrent Neural Network (RNN) [4, 5] can be considered an ideal financial time series analysis algorithm because of its natural sequence structure.

The Long Short-Term Memory network (LSTM) [6, 7], a special variant of the RNN, is often used to process time series events separated by long delays or large intervals. It has proved its worth in handwritten digit recognition, question answering systems and speech recognition. Compared with the traditional RNN, the LSTM is characterised by selective memory and internal gating over time, which makes it well suited to non-stationary, random data such as stock price series. The Gated Recurrent Unit (GRU) [8, 9] is similar to an LSTM but has fewer parameters, and it performs better than LSTM on some smaller data sets.

In this study, an initial forecast of the stock price is performed first; the likely error is then predicted, and the initial predicted price and the forecast error are finally combined to obtain the final predicted value. To reduce the errors arising from the historical data, we use RNN prediction methods, such as LSTM and GRU, which have become popular in recent years.

The paper is organised as follows. Section 2 presents the methodologies used in this study. The proposed model is introduced in Section 3. Section 4 introduces the experimental results. Section 5 summarises the paper.

Artificial Neural Networks (ANN) [10, 11] are a research hotspot in artificial intelligence. An ANN simulates the network structure of the human brain, and different network models can be constructed from different connection patterns. It is often referred to simply as a Neural Network (NN).

The Multi-Layer Perceptron (MLP) [12, 13] is a feed-forward artificial neural network that maps a set of input vectors to a set of output vectors, as shown in Figure 1. An MLP can be viewed as a directed graph composed of multiple node layers, in which each node is connected to the next layer. Every node other than the input nodes is a neuron with a non-linear activation function. A supervised learning method called back-propagation is usually used to train MLPs.
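As a minimal sketch of the idea (layer sizes and weights are illustrative, not those used in the paper), a one-hidden-layer MLP forward pass can be written in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    """Feed-forward pass: one non-linear hidden layer, then a linear output."""
    h = np.tanh(x @ W1 + b1)  # hidden layer with non-linear activation
    return h @ W2 + b2        # linear output layer

# Illustrative sizes: a 6-dimensional input mapped to a single output.
W1, b1 = rng.normal(size=(6, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 1)), np.zeros(1)
y = mlp_forward(rng.normal(size=(1, 6)), W1, b1, W2, b2)
print(y.shape)  # (1, 1)
```

In training, back-propagation would adjust `W1`, `b1`, `W2` and `b2` to minimise the prediction error.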

In the 1980s, the MLP was a popular method with various applications, including image recognition and machine translation. In recent years, as deep learning has become the focus of attention, the MLP has attracted renewed interest.

The RNN [4, 5] is an artificial neural network in which the connections between nodes form a directed graph along a sequence. The RNN can use its internal state to process sequences of inputs, enabling it to model the temporal dynamics of a time series and thus complete tasks such as handwriting recognition or speech recognition.

The RNN is a convenient tool for processing sequence data. The input of the hidden layer comes from both the output of the input layer and the output of the hidden layer at the previous time step, as shown in Figure 2. In theory, the RNN can process sequences of any length; in practice, to reduce complexity, the current state is generally assumed to depend only on a certain number of previous states.
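The recurrence described above can be sketched in a few lines of NumPy (a toy example with invented sizes; the hidden state at each step depends on the current input and the previous hidden state):

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, b, h0):
    """Run a vanilla RNN over a sequence, returning all hidden states."""
    h = h0
    states = []
    for x in xs:  # iterate over the time steps
        # Current state mixes the current input with the previous state.
        h = np.tanh(x @ Wx + h @ Wh + b)
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(1)
T, d_in, d_h = 5, 3, 4  # 5 time steps, 3 input features, 4 hidden units
xs = rng.normal(size=(T, d_in))
Wx = rng.normal(size=(d_in, d_h))
Wh = rng.normal(size=(d_h, d_h))
H = rnn_forward(xs, Wx, Wh, np.zeros(d_h), np.zeros(d_h))
print(H.shape)  # (5, 4)
```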

In contrast to traditional machine learning models, in which the hidden-layer units are independent of one another, the hidden layer of an RNN forms a time sequence from left to right. During analysis, the RNN is therefore often unrolled in time to obtain the structure shown in Figure 3.

The LSTM [6, 7] network is an RNN that can process and predict important events separated by long intervals and delays in a time series. LSTM is widely applied in technology: systems based on LSTM can learn tasks such as language translation, speech recognition, handwriting recognition, disease prediction and stock forecasting.

The LSTM differs from the plain RNN in that a 'processor' is added to identify useful information. The structure of this processor is called a unit, and three gates are arranged in each unit: the input gate, the forget gate and the output gate, as shown in Figure 4. When messages enter the LSTM network, they are checked against the gating rules: information that matches is retained, while unmatched information is discarded through the forget gate. LSTM is very effective at modelling long-range dependencies, which makes it highly versatile.

A typical model of LSTM is defined as follows:

$$
\begin{aligned}
i_t &= \sigma\left(W_i \cdot [h_{t-1}, x_t] + b_i\right)\\
f_t &= \sigma\left(W_f \cdot [h_{t-1}, x_t] + b_f\right)\\
o_t &= \sigma\left(W_o \cdot [h_{t-1}, x_t] + b_o\right)\\
\tilde{c}_t &= \tanh\left(W_c \cdot [h_{t-1}, x_t] + b_c\right)\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$
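A single LSTM time step in this standard formulation can be sketched in NumPy (a minimal illustration with the four gate weight matrices stacked into one matrix `W`; all names and sizes are illustrative, not the paper's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W maps the concatenation [h_{t-1}, x_t] to the
    stacked pre-activations of the input, forget, output and candidate gates."""
    d = h_prev.shape[0]
    z = np.concatenate([h_prev, x]) @ W + b  # shape (4*d,)
    i = sigmoid(z[:d])               # input gate i_t
    f = sigmoid(z[d:2 * d])          # forget gate f_t
    o = sigmoid(z[2 * d:3 * d])      # output gate o_t
    c_tilde = np.tanh(z[3 * d:])     # candidate cell state
    c = f * c_prev + i * c_tilde     # new cell state c_t
    h = o * np.tanh(c)               # new hidden state h_t
    return h, c

rng = np.random.default_rng(2)
d_in, d_h = 6, 30  # 6 daily variables, 30 units as in the NN1 layer
W = rng.normal(scale=0.1, size=(d_h + d_in, 4 * d_h))
h, c = lstm_step(rng.normal(size=d_in), np.zeros(d_h), np.zeros(d_h),
                 W, np.zeros(4 * d_h))
print(h.shape)  # (30,)
```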

The GRU [8, 9] is a gating mechanism in a recurrent neural network. The GRU resembles an LSTM but carries only two gates and has fewer parameters, as shown in Figure 5. The GRU's performance in music modelling and speech signal modelling is similar to that of the LSTM, and the GRU outperforms the LSTM on some smaller data sets.

A typical model of GRU is defined as follows:

$$
\begin{aligned}
r_t &= \sigma\left(W_r \cdot [h_{t-1}, x_t] + b_r\right)\\
z_t &= \sigma\left(W_z \cdot [h_{t-1}, x_t] + b_z\right)\\
\tilde{h}_t &= \tanh\left(W_h \cdot [r_t \odot h_{t-1}, x_t] + b_h\right)\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{aligned}
$$
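A single GRU time step in this standard formulation can likewise be sketched in NumPy (sizes and names are illustrative only):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, Wr, Wz, Wh, br, bz, bh):
    """One GRU time step with a reset gate r_t and an update gate z_t."""
    hx = np.concatenate([h_prev, x])
    r = sigmoid(hx @ Wr + br)  # reset gate r_t
    z = sigmoid(hx @ Wz + bz)  # update gate z_t
    # Candidate state uses the reset-gated previous state.
    h_tilde = np.tanh(np.concatenate([r * h_prev, x]) @ Wh + bh)
    # Update gate interpolates between the old state and the candidate.
    return (1 - z) * h_prev + z * h_tilde

rng = np.random.default_rng(3)
d_in, d_h = 6, 30
Wr, Wz, Wh = (rng.normal(scale=0.1, size=(d_h + d_in, d_h)) for _ in range(3))
h = gru_step(rng.normal(size=d_in), np.zeros(d_h), Wr, Wz, Wh,
             np.zeros(d_h), np.zeros(d_h), np.zeros(d_h))
print(h.shape)  # (30,)
```

Note that the GRU needs only three weight matrices against the LSTM's four, which is where its parameter saving comes from.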

Figure 6 shows the proposed model with the detailed prediction steps as given below:

From input history data {[_{1}, y_{1}_{t}, y_{t}^{p’}_{t+1}

Get the error history {_{1},e_{2},_{t}_{t}_{t}-y^{p’}_{t}

From the error history {_{1},e_{2},_{t}^{p}_{t+1}

Get the final predictive result using the equation: ^{p}_{t+1}^{p’}_{t+1}^{p}_{t+1}
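The combination step can be illustrated numerically with toy numbers (all values below are invented for illustration, and a simple mean stands in for the trained NN2 error model):

```python
import numpy as np

y      = np.array([10.0, 10.5, 10.2, 10.8, 11.0])  # observed high prices y_t
y_init = np.array([10.2, 10.4, 10.5, 10.6, 10.7])  # step 1: NN1 predictions

errors = y - y_init            # step 2: error history e_t = y_t - y'_t
e_next = errors.mean()         # step 3: stand-in for NN2's error forecast
y_next_init = 10.9             # NN1's initial prediction for day t+1 (toy)
y_next = y_next_init + e_next  # step 4: final prediction
print(round(y_next, 2))  # 10.92
```

In the actual model, both the initial prediction and the error forecast come from trained recurrent networks rather than from a fixed array and a mean.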

The flow of GRU or LSTM in NN1 is shown in Figure 7. After some preliminary experiments, we found this model to be suitable for our problem. The GRU/LSTM layer has 30 GRU/LSTM units, and the dense layer uses a linear activation function. The output is the predicted high price of the next day.

To assess the forecasting effect of the proposed model, the results were evaluated with several criteria: the Root Mean Square Error (RMSE), the Mean Absolute Percentage Error (MAPE) and the Mean Absolute Scaled Error (MASE), as shown in Eqs (11)–(13) [14]. The RMSE, MAPE and MASE measure the difference between the actual values $y_t$ and the predicted values $y^{p}_t$; smaller values indicate a more accurate prediction. We also report Dstat, the directional statistic, which measures the proportion of days on which the direction of the price movement is predicted correctly; larger values are better.
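These criteria can be sketched directly in NumPy (assuming the common definitions; for MASE, the mean absolute error is scaled by the in-sample mean absolute error of the naive one-step forecast):

```python
import numpy as np

def rmse(y, yp):
    """Root mean square error."""
    return np.sqrt(np.mean((y - yp) ** 2))

def mape(y, yp):
    """Mean absolute percentage error, in per cent."""
    return 100.0 * np.mean(np.abs((y - yp) / y))

def mase(y, yp):
    """Mean absolute scaled error: MAE divided by the naive forecast's MAE."""
    scale = np.mean(np.abs(np.diff(y)))
    return np.mean(np.abs(y - yp)) / scale

y  = np.array([100.0, 102.0, 101.0, 104.0])  # toy actual values
yp = np.array([101.0, 101.0, 102.0, 103.0])  # toy predictions
print(rmse(y, yp))  # 1.0
```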

The Shanghai Composite Index (000001.SH) of China is used as experimental data. The daily data include six variables: opening price, high price, low price, closing price, price change and number of transactions.

The data from the previous few days are used as input, and the high price of the next day is used as output. In the simulations, the stock prices from June 2010 to August 2017 are used as training data (as shown in Figure 8) and those from September 2017 to April 2018 are used as test data (as shown in Figure 9).
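The construction of input/output pairs from the daily records can be sketched as a sliding window (a toy illustration: the window length and the position of the high-price column are assumptions, not values stated in the paper):

```python
import numpy as np

def make_windows(data, lookback):
    """Use the previous `lookback` days of all six variables as input and
    the next day's high price (assumed to be column 1) as the target."""
    X, y = [], []
    for t in range(lookback, len(data)):
        X.append(data[t - lookback:t])  # previous days' features
        y.append(data[t, 1])            # next day's high price (assumption)
    return np.array(X), np.array(y)

# Stand-in data: 100 trading days x 6 variables
# (open, high, low, close, change, transactions).
data = np.arange(600, dtype=float).reshape(100, 6)
X, y = make_windows(data, lookback=5)
print(X.shape, y.shape)  # (95, 5, 6) (95,)
```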

The experiments were implemented in Python 3.5.3 and run on a computer with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz, 8.0 GB of RAM and Microsoft Windows 10 Professional (64-bit).

Tables 1–3 show the experimental results. Each experiment was run 10 times independently, and the average values over the 10 runs are reported. 'Time' refers to the mean test time, in seconds.

Table 1. The experiment results (NN1: LSTM, DATA SET: 000001.SH).

| Model (NN1, NN2) | Time (s) | RMSE | MAPE | MASE | Dstat |
|---|---|---|---|---|---|
| LSTM | 0.102 | 27.243 | 0.643 | 1.204 | 0.653 |
| LSTM, GRU | 0.188 | | | | |
| LSTM, LSTM | 0.207 | 22.225 | 0.483 | 0.903 | 0.667 |
| LSTM, MLP | 0.125 | 23.570 | 0.526 | 0.984 | 0.669 |

Table 2. The experiment results (NN1: GRU, DATA SET: 000001.SH).

| Model (NN1, NN2) | Time (s) | RMSE | MAPE | MASE | Dstat |
|---|---|---|---|---|---|
| GRU | 0.090 | 28.676 | 0.683 | 1.281 | 0.693 |
| GRU, GRU | 0.178 | | | | |
| GRU, LSTM | 0.193 | 21.319 | 0.450 | 0.840 | 0.703 |
| GRU, MLP | 0.112 | 23.559 | 0.517 | 0.968 | 0.710 |

Table 3. The experiment results (NN1: MLP, DATA SET: 000001.SH).

| Model (NN1, NN2) | Time (s) | RMSE | MAPE | MASE | Dstat |
|---|---|---|---|---|---|
| MLP | 0.023 | 29.953 | 0.729 | 1.368 | 0.640 |
| MLP, GRU | 0.114 | 23.964 | 0.521 | 0.973 | |
| MLP, LSTM | 0.128 | | | | 0.658 |
| MLP, MLP | 0.045 | 24.727 | 0.560 | 1.047 | 0.657 |

In Table 1, NN1 is LSTM and NN2 is LSTM, GRU or MLP, respectively. The error-index values (RMSE, MAPE, MASE) of the proposed models are smaller than those of LSTM alone, and the Dstat values are larger. Thus, the predictive performance of the proposed method is superior to that of LSTM, although its running time is longer.

In Table 2, NN1 is GRU and NN2 is LSTM, GRU or MLP, respectively. The error-index values of the proposed models are smaller than those of GRU alone, and the Dstat values are larger, although the running time is longer.

In Table 3, NN1 is MLP and NN2 is LSTM, GRU or MLP, respectively. The error-index values of the proposed models are smaller than those of MLP alone, and the Dstat values are larger, although the running time is longer.

Tables 1–3 show that the proposed method with NN1: GRU and NN2: GRU performs best overall, because GRU has better performance than LSTM on some smaller data sets.

Figure 10 shows a prediction sample of the proposed method (NN1:GRU, NN2:GRU). The blue zigzag line is the actual trend; the red zigzag line is the predicted trend. The proposed method can give a precise prediction.

The Shenzhen Composite Index (399001.SZ) of China is used as experimental data. In the simulations, the stock prices from May 2011 to July 2018 are used as training data, and those from August 2018 to March 2019 are used as test data. The Standard Deviation of data set 399001.SZ is bigger than that of data set 000001.SH, as shown in Table 4.

Table 4. The standard deviation of the experimental data.

| Data set | Training data | Test data |
|---|---|---|
| 000001.SH | 595.8 | 106.6 |
| 399001.SZ | 1773.1 | 749.3 |

In Table 5, the proposed method (NN1: GRU, NN2: GRU) again performs best. On the other hand, the error-index values (RMSE, MAPE, MASE) in Table 5 are larger than those in Table 2 because the standard deviation of data set 399001.SZ is larger than that of data set 000001.SH.

Table 5. The experiment results (NN1: GRU, DATA SET: 399001.SZ).

| Model (NN1, NN2) | Time (s) | RMSE | MAPE | MASE | Dstat |
|---|---|---|---|---|---|
| GRU | 0.090 | 127.056 | 1.143 | 1.048 | 0.621 |
| GRU, GRU | 0.177 | | | | |
| GRU, LSTM | 0.109 | 126.870 | 1.092 | 1.007 | 0.620 |
| GRU, MLP | 0.023 | 120.234 | 1.050 | 0.961 | 0.615 |

Stock price forecasting is a research hotspot, and accurate prediction of stock prices is difficult. In the proposed model, an initial forecast of the stock price is performed first, the likely error is then predicted, and the initial predicted price and the forecast error are combined to obtain the final predicted value. In the experiments, we combined the GRU, LSTM and MLP methods. The results of our study show that the proposed model is effective, and the configuration (NN1: GRU, NN2: GRU) shows superior performance because GRU performs better than LSTM on some smaller data sets.

In future work, we will study further parameter settings of the neural networks, such as the number of layers, the number of nodes in each layer and the activation function. Shortening the training time is another problem that needs to be solved.

