Journal Details
Format: Journal
eISSN: 2444-8656
First Published: 01 Jan 2016
Publication timeframe: 2 times per year
Languages: English
Open Access

Application of Nonlinear Fractional Differential Equations in Computer Artificial Intelligence Algorithms

Published Online: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Pages: -
Received: 19 Feb 2022
Accepted: 25 Apr 2022
Introduction

Artificial intelligence algorithms accumulate large amounts of user data through network platforms. The values of the algorithm's controllers (and, in many cases, its developers) are thus deeply embedded in a supposedly neutral technology and quietly shape the audience's ideas, decisions, and behavior patterns. Through continuous machine learning, algorithms have gained enormous advantages over humans in data capture, learning, and content push, which further strengthens their influence on the audience. Platform media such as Toutiao and social platforms such as WeChat are actively using artificial intelligence algorithms to increase user stickiness and market penetration. Unlike the manual news filtering and pushing of the traditional media era, and unlike the social filtering of the social network era, the AI algorithmic recommendation mechanism of the big data era has stronger data capture and learning abilities, and the scale and efficiency of information push grow exponentially.

The advent of artificial intelligence algorithms has changed this fundamentally. Relying on big data technology, an artificial intelligence algorithm takes the user's behavior data, such as browsing content, forwarding, and comments, and applies deep machine learning and algorithmic analysis of identity data to accurately identify and push information matching each user's value needs and preferences; in short, "only what you pay attention to is the headline." On this basis, algorithmic recommendation divides users into multiple overlapping groups according to their different value preferences and information needs, pushes the required information to each group, and realizes a big data filtering that differs from Moments filtering, so that users can control the capture and reception of information more freely [1]. Fractional calculus provides a powerful tool for describing the memory and hereditary properties of various materials and processes, and it has been used in many scientific and engineering fields, such as viscoelasticity, anomalous diffusion, fluid mechanics, biology, chemistry, acoustics, and control theory. In such applications, fractional differential equations, a class of integro-differential equations with singular kernels, arise naturally. Existence and uniqueness theorems for solutions of fractional ordinary differential equations have been established. For linear fractional differential equations, the commonly used integral transform methods, including the Laplace, Fourier, and Mellin transforms, yield analytical solutions.

Research Methods
Solving methods of nonlinear fractional differential equations

There are two commonly used classes of numerical algorithms for nonlinear fractional differential equations: predictor-corrector methods and time-frequency domain conversion algorithms. The former is a classical approach, a generalization of the Adams-Bashforth-Moulton method for first-order differential equations, and is widely used in practical fractional-order computation. The latter includes approximation methods based on continued-fraction expansion and interpolation, and approximation methods based on curve-fitting identification techniques.
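As an illustration of the predictor-corrector approach, the following sketch implements the Diethelm-Ford-Freed fractional Adams-Bashforth-Moulton scheme for the Caputo problem studied below. The test equation, step size, and grid are assumptions chosen only for demonstration, not results from this paper.

```python
# Sketch of the fractional Adams-Bashforth-Moulton predictor-corrector for the Caputo problem
#     D_t^lam u(t) + f(u(t)) = g(t),  1 < lam <= 2,  u(0) = C0, u'(0) = C1.
import numpy as np
from math import gamma

def fabm_solve(lam, f, g, C0, C1, T, N):
    """Predictor-corrector solution of D^lam u + f(u) = g(t) on [0, T] with N steps."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    u = np.zeros(N + 1)
    u[0] = C0
    rhs = lambda tk, uk: g(tk) - f(uk)      # F(t, u) = g(t) - f(u)
    taylor = lambda tk: C0 + C1 * tk        # initial-condition polynomial (2 terms since 1 < lam <= 2)
    Fhist = np.zeros(N + 1)
    Fhist[0] = rhs(t[0], u[0])
    for n in range(N):
        j = np.arange(n + 1)
        # predictor weights b_{j,n+1} and corrector weights a_{j,n+1}
        b = (n + 1 - j) ** lam - (n - j) ** lam
        a = (n - j + 2) ** (lam + 1) + (n - j) ** (lam + 1) - 2.0 * (n - j + 1) ** (lam + 1)
        a[0] = n ** (lam + 1) - (n - lam) * (n + 1) ** lam
        # predict, then correct
        up = taylor(t[n + 1]) + h ** lam / gamma(lam + 1) * np.dot(b, Fhist[: n + 1])
        u[n + 1] = taylor(t[n + 1]) + h ** lam / gamma(lam + 2) * (
            np.dot(a, Fhist[: n + 1]) + rhs(t[n + 1], up))
        Fhist[n + 1] = rhs(t[n + 1], u[n + 1])
    return t, u

# Assumed test problem: D^1.5 u + u^2 = t, u(0) = 1, u'(0) = 0.
t, u = fabm_solve(lam=1.5, f=lambda v: v ** 2, g=lambda s: s, C0=1.0, C1=0.0, T=1.0, N=200)
print(u[-1])
```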

Decomposition methods have been used effectively to solve linear and nonlinear fractional differential equations. A numerical scheme for fractional differential equations based on decomposition is given in [1]; the ADM-Padé approximation technique has also been applied to fractional differential equations, and the Rach-Adomian-Meyers modified decomposition method has been extended to nonlinear fractional differential equations. Other analytical and numerical methods for nonlinear fractional differential equations can be found in [2].

We consider the initial value problem for nonlinear fractional ordinary differential equations:

$$D_t^\lambda u(t) + f(u(t)) = g(t), \quad 1 < \lambda \le 2 \tag{1}$$
$$u(0) = C_0, \quad u'(0) = C_1 \tag{2}$$

where f is an analytic nonlinear function and g(t) is the system input. Applying the fractional integral operator $J_t^\lambda$ to both sides of equation (1), we get:

$$u(t) = C_0 + C_1 t + J_t^\lambda g(t) - J_t^\lambda f(u(t)) \tag{3}$$

Table 1. Iterative Shanks transform

$S_1^{(0)}$   $S_1^{(1)}$   $S_1^{(2)}$   $S_1^{(3)}$
$S_2^{(0)}$   $S_2^{(1)}$   $S_2^{(2)}$
$S_3^{(0)}$   $S_3^{(1)}$   $S_3^{(2)}$
$S_4^{(0)}$   $S_4^{(1)}$
$S_5^{(0)}$   $S_5^{(1)}$
$S_6^{(0)}$
$S_7^{(0)}$

We decompose the solution as $u(t) = \sum_{n=0}^\infty u_n$ and decompose the analytic nonlinear term $Nu = f(u(t))$ into the Adomian polynomial series $f(u(t)) = \sum_{n=0}^\infty A_n$.
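For readers unfamiliar with the construction of the Adomian polynomials, the following sketch generates $A_n$ from the classical parametrisation $A_n = \frac{1}{n!}\frac{d^n}{d\lambda^n} f\left(\sum_k u_k \lambda^k\right)\big|_{\lambda=0}$; the nonlinearity f(u) = u² used here is only an assumed example.

```python
# Minimal sketch: Adomian polynomials A_n of an analytic nonlinearity f(u), via sympy.
import sympy as sp

def adomian_polynomials(f, order):
    """Return A_0, ..., A_{order-1} as expressions in the solution components u0, u1, ..."""
    lam = sp.symbols('lam')
    u = sp.symbols(f'u0:{order}')                     # components u_0 ... u_{order-1}
    series = sum(u[k] * lam ** k for k in range(order))
    return [sp.expand(sp.diff(f(series), lam, n).subs(lam, 0) / sp.factorial(n))
            for n in range(order)]

for n, An in enumerate(adomian_polynomials(lambda v: v ** 2, 4)):   # assumed f(u) = u^2
    print(f"A_{n} =", An)
# A_0 = u0**2, A_1 = 2*u0*u1, A_2 = 2*u0*u2 + u1**2, A_3 = 2*u0*u3 + 2*u1*u2
```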

Substituting the decompositions of the solution and of the nonlinear term into equation (3), we get:

$$\sum_{n=0}^\infty u_n = C_0 + C_1 t + J_t^\lambda g(t) - J_t^\lambda \sum_{n=0}^\infty A_n$$

From this we obtain the recursive form of the solution components:

$$u_0 = C_0 + C_1 t + J_t^\lambda g(t)$$
$$u_{n+1} = -J_t^\lambda A_n, \quad n \ge 0$$

Alternatively, we can apply the recursion modified by Wazwaz:

$$u_0 = C_0$$
$$u_1 = C_1 t + J_t^\lambda g_0 - J_t^\lambda A_0$$
$$u_{n+1} = J_t^\lambda \left(g_n t^n\right) - J_t^\lambda A_n, \quad n \ge 1$$

Here we decompose the system input as $g(t) = \sum_{n=0}^\infty g_n t^n$.

The n-term approximate solution is $\Phi_n(t) = \sum_{k=0}^{n-1} u_k$.
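The recursion can be carried out symbolically. The sketch below assumes the test problem $D_t^{3/2}u + u^2 = 0$, $u(0)=1$, $u'(0)=0$ (so g(t) = 0) and applies $J_t^\lambda$ term by term using $J_t^\lambda t^k = \frac{\Gamma(k+1)}{\Gamma(k+1+\lambda)} t^{k+\lambda}$ to build the n-term approximation $\Phi_n(t)$; the problem data are illustrative assumptions.

```python
# Sketch of the decomposition recursion u_0 = C0 + C1 t + J^lam g, u_{n+1} = -J^lam A_n.
import sympy as sp

t = sp.symbols('t', positive=True)
lam = sp.Rational(3, 2)

def J(expr):
    """Fractional integral J^lam applied term by term: J^lam t^k = Gamma(k+1)/Gamma(k+1+lam) t^(k+lam)."""
    out = sp.Integer(0)
    for term in sp.Add.make_args(sp.expand(expr)):
        c, k = term.as_coeff_exponent(t)
        out += c * sp.gamma(k + 1) / sp.gamma(k + 1 + lam) * t ** (k + lam)
    return out

def adomian(f, order):
    """Adomian polynomials A_0 ... A_{order-1} (same construction as in the previous sketch)."""
    s = sp.symbols('s')
    u = sp.symbols(f'u0:{order}')
    series = sum(u[k] * s ** k for k in range(order))
    return u, [sp.expand(sp.diff(f(series), s, n).subs(s, 0) / sp.factorial(n)) for n in range(order)]

# Assumed test problem: D^(3/2) u + u^2 = 0, u(0) = 1, u'(0) = 0, g(t) = 0.
N = 4
usyms, A = adomian(lambda v: v ** 2, N)
comps = [sp.Integer(1)]                              # u_0 = C0 + C1*t + J^lam g = 1 here
for n in range(N - 1):
    An = A[n].subs(dict(zip(usyms, comps)))          # evaluate A_n at the known components
    comps.append(sp.simplify(-J(An)))                # u_{n+1} = -J^lam A_n
print(sp.expand(sum(comps)))                         # the 4-term approximation Phi_4(t)
```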

Next we consider the Rach-Adomian-Meyers modified decomposition method. For the initial value problem (1)-(2), if λ is a rational number, λ = q/p with p and q coprime positive integers, the system input g(t) can be expressed as a generalized power series:

$$g(t) = \sum_{n=0}^\infty g_n t^{n/p} \tag{12}$$

We note that every function g(t) analytic at the point t = 0 can be expressed in the form of Eq. (12). We decompose the solution into a generalized power series:

$$u(t) = \sum_{n=0}^\infty a_n t^{n/p} \tag{13}$$

We choose:

$$a_0 = C_0, \quad a_1 = \ldots = a_{p-1} = 0, \quad a_p = C_1, \quad a_{p+1} = \ldots = a_{q-1} = 0$$

In this way (13) satisfies the initial conditions (2). The nonlinear term is then written as:

$$f(u(t)) = \sum_{n=0}^\infty A_n t^{n/p} \tag{15}$$

Calculating the fractional derivative term by term, we have:

$$D_t^\lambda u(t) = \sum_{n=q}^\infty a_n \frac{\Gamma\!\left(\frac{n}{p}+1\right)}{\Gamma\!\left(\frac{n}{p}-\lambda+1\right)} t^{\frac{n}{p}-\lambda} \tag{16}$$

Substituting equations (12), (15) and (16) into equation (1) and comparing coefficients of like powers, we obtain the recurrence for the coefficients $a_n$:

$$a_{n+q} \frac{\Gamma\!\left(\frac{n+q}{p}+1\right)}{\Gamma\!\left(\frac{n}{p}+1\right)} + A_n = g_n, \quad n = 0, 1, 2, \ldots$$

The n-term approximate solution is $\Phi_n(t) = \sum_{k=0}^{n-1} a_k t^{k/p}$.
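A minimal numeric sketch of this recurrence follows. To keep the Adomian coefficients trivial it assumes the linear case f(u) = u (so that $A_n = a_n$), with λ = q/p = 3/2, g(t) = 0, and initial data u(0) = 1, u'(0) = 0; all of these choices are assumptions made only for illustration.

```python
# Sketch of the generalized power-series recurrence a_{n+q} Gamma((n+q)/p+1)/Gamma(n/p+1) = g_n - A_n.
from math import gamma

p, q = 2, 3                      # lam = q/p = 1.5 (assumed)
C0, C1 = 1.0, 0.0                # assumed initial data u(0) = 1, u'(0) = 0
N = 12                           # number of generalized power-series coefficients a_0 ... a_{N-1}
a = [0.0] * N
a[0] = C0                        # a_0 = C0, a_1 ... a_{p-1} = 0
a[p] = C1                        # a_p = C1, a_{p+1} ... a_{q-1} = 0
g = [0.0] * N                    # system input g(t) = 0 in this example

for n in range(N - q):
    A_n = a[n]                   # Adomian coefficient for the assumed linear term f(u) = u
    a[n + q] = (g[n] - A_n) * gamma(n / p + 1) / gamma((n + q) / p + 1)

# u(t) ~ sum_k a_k t^(k/p); print the first few nonzero coefficients
for k, ak in enumerate(a):
    if abs(ak) > 1e-14:
        print(f"a_{k} (coefficient of t^({k}/{p})) = {ak:.6f}")
```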

Convergence acceleration techniques are used to speed up the convergence of sequences or series and can even extend the domain of convergence. In this article we use the iterated Shanks transform. Suppose, for example, that we obtain the approximation sequence Φ₁(t), Φ₂(t), …, Φ₇(t). We first set $S_n^{(0)} = \Phi_n(t)$ and then transform:

$$S_n^{(k)} = \frac{S_n^{(k-1)} S_{n+2}^{(k-1)} - \left(S_{n+1}^{(k-1)}\right)^2}{S_n^{(k-1)} + S_{n+2}^{(k-1)} - 2 S_{n+1}^{(k-1)}}$$

Table 1 shows the transformation process; $S_1^{(3)}$ in the table is the final transformed result. We write:

$$\mathrm{Shanks}[1, 2, \ldots, 7] = \mathrm{IST}\{\Phi_1(t), \Phi_2(t), \ldots, \Phi_7(t)\} := S_1^{(3)}$$
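A short numeric sketch of the iterated Shanks transform is shown below; the test sequence (partial sums of the alternating harmonic series, which converges slowly to ln 2) is an assumption used only to demonstrate the acceleration.

```python
# Minimal sketch of the iterated Shanks transform applied to a slowly convergent sequence.
import numpy as np

def iterated_shanks(S):
    """Apply the Shanks transform repeatedly and return the single accelerated value S_1^(k)."""
    S = np.asarray(S, dtype=float)
    while len(S) >= 3:
        num = S[:-2] * S[2:] - S[1:-1] ** 2
        den = S[:-2] + S[2:] - 2.0 * S[1:-1]
        S = num / den
    return S[-1]

# Partial sums Phi_1 ... Phi_7 of sum (-1)^(n+1)/n, which converges slowly to ln 2.
partial_sums = np.cumsum([(-1) ** (n + 1) / n for n in range(1, 8)])
print(partial_sums[-1])               # ~0.7595: the raw 7-term partial sum
print(iterated_shanks(partial_sums))  # ~0.69315: much closer to ln 2 = 0.693147...
```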

The basic algorithm of computer artificial intelligence network

An artificial intelligence neural network consists of an input layer, one or more hidden layers, and an output layer, each composed of multiple neuron nodes. Although the hidden layer is not connected to the outside world, its state is crucial: to a considerable extent it directly determines the mapping between input and output. Many scholars have shown that a three-layer neural network containing only an input layer, a hidden layer, and an output layer already has strong approximation ability [3].

The learning of an artificial intelligence neural network is supervised learning, and the learning process consists of two parts: forward propagation of the signal and back propagation of the error. The basic training procedure is to feed sample data into the input layer, pass it through the hidden layer, and finally produce the output at the output layer. Both the hidden layer and the output layer have differentiable activation functions, and the output of each layer of neurons affects only the output of the next layer. During forward propagation the weights and thresholds of the network do not change. If the output differs from the expected value, the error is back-propagated: the network adjusts the connection weights of each layer according to the error, and the process repeats until the error meets the requirement.

The sample data is input from the input layer to the hidden layer, and the input of a single hidden-layer node t is:

$$net_t = \sum_{s=1}^m W_{st} x_s - \theta_t, \quad t = 1, 2, \ldots, p \tag{21}$$

The output of hidden-layer node t is:

$$O_t = f(net_t), \quad t = 1, 2, \ldots, p \tag{22}$$

where $f(net_t)$ is the node function, or activation function, of the hidden layer. Different functions are generally selected according to the characteristics of the samples, such as a linear function or the hyperbolic tangent function.

The data is then passed from the hidden layer to the output layer, and the input of a single output-layer node k is:

$$yi_k = \sum_{t=1}^p W_{tk} O_t - \theta_k, \quad k = 1, 2, \ldots, n$$

The output of output-layer node k is:

$$yo_k = f(yi_k), \quad k = 1, 2, \ldots, n$$

where $f(yi_k)$ is the node function of the output layer.

In the initial stage there is an error between the actual output of the network and the expected value. In practical data training, the error is usually computed with the squared-error formula, namely formula (25):

$$E = \frac{1}{2}\sum_{k=1}^n \left(y_k - yo_k\right)^2 \tag{25}$$

where $y_k$ is the expected output of output neuron k and $yo_k$ is its actual output. The error here is the sum of the errors of all neurons in the output layer.

The final stable state of the artificial intelligence neural network is reached when the actual output equals, or is infinitely close to, the expected value. Therefore, during training the connection weights and thresholds of the network must be modified continually. The weight-correction formula of the neural network uses gradient descent, which is essentially the simple static steepest-descent optimization algorithm [4]:

$$W_{st} = W_{st} - u \cdot \frac{\partial E}{\partial W_{st}} \tag{26}$$
$$W_{tk} = W_{tk} - u \cdot \frac{\partial E}{\partial W_{tk}} \tag{27}$$

Here u is the weight-correction coefficient, that is, the learning rate. The learning rate affects the convergence speed of the algorithm: if it is too small the network converges very slowly, and if it is too large the network oscillates and fails to converge. In the classical algorithm, each iteration requires an exact one-dimensional line search to obtain the optimal step size; however, the line search requires many evaluations, consumes a lot of computing time, and is difficult to program and apply. Therefore a one-dimensional search is generally not used to optimize the learning rate, and a fixed value in (0, 1) is used instead.

The initial weights also affect learning. They are usually chosen as small positive and negative values near 0, preferably randomly and uniformly distributed, which widens the search range for the optimal weights.

The partial derivative of the error E with respect to a weight W in equations (26)-(27) is also called the weight correction of one iteration. The weight correction can be decomposed into terms involving the input of the hidden layer or the input of the output layer, which are easy to compute. Substituting equations (21) and (22) into the correction of the input-to-hidden weights $W_{st}$, we obtain, by the chain rule:

$$\frac{\partial E}{\partial W_{st}} = \frac{\partial E}{\partial net_t} \cdot \frac{\partial net_t}{\partial W_{st}}$$
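The forward pass, squared-error loss, and gradient-descent corrections described above can be collected into a small numerical sketch. The layer sizes, synthetic data, and learning rate below are illustrative assumptions, not the configuration used in the experiments reported next.

```python
# Minimal sketch of the three-layer network: forward pass, squared error, gradient-descent updates.
import numpy as np

rng = np.random.default_rng(0)
m, p, n = 2, 5, 1                    # input, hidden, and output layer sizes (assumed)
u = 0.5                              # learning rate, a fixed value in (0, 1)
W1 = rng.uniform(-0.5, 0.5, (m, p)); th1 = np.zeros(p)   # W_st and hidden thresholds theta_t
W2 = rng.uniform(-0.5, 0.5, (p, n)); th2 = np.zeros(n)   # W_tk and output thresholds theta_k

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

X = rng.uniform(-1.0, 1.0, (30, m))                  # 30 samples, as in the experiment below
Y = (X[:, :1] * X[:, 1:2] > 0).astype(float)         # assumed toy target

for epoch in range(3000):
    # signal forward propagation
    net_t = X @ W1 - th1;  O = sigmoid(net_t)        # hidden-layer input and output, eqs. (21)-(22)
    yi = O @ W2 - th2;     yo = sigmoid(yi)          # output-layer input and output
    E = 0.5 * np.sum((Y - yo) ** 2)                  # squared error, eq. (25)
    # error back propagation via the chain rule dE/dW = (dE/dnet)(dnet/dW)
    d_out = (yo - Y) * yo * (1.0 - yo)               # dE/d(yi_k)
    d_hid = (d_out @ W2.T) * O * (1.0 - O)           # dE/d(net_t)
    W2 -= u * (O.T @ d_out);  th2 += u * d_out.sum(axis=0)   # minus sign in yi = ... - theta flips the gradient
    W1 -= u * (X.T @ d_hid);  th1 += u * d_hid.sum(axis=0)

print("final squared error:", round(float(E), 4))
```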

In MATLAB, we run the integer-order artificial intelligence neural network program with 30 samples and obtain Figure 1 and Figure 2, where the abscissa is the number of iteration steps, the ordinate is the training error, P is the number of neurons in the hidden layer, and u is the learning rate. Figure 1 shows the network trained on the same data set with the same number of hidden-layer neurons but different learning rates. Figure 2 shows the network trained on the same data set with the same learning rate but different numbers of hidden-layer neurons [5]. Figure 1 shows that the smaller the learning rate, the slower the network converges. Figure 2 shows that too many hidden-layer neurons make convergence very slow and poor; however, it is not the case that fewer neurons always give a smaller convergence error. Only with an appropriate number of neurons does the network achieve better convergence and a smaller convergence error.

Figure 1

The training error diagram of integer-order artificial intelligence neural network with different learning rates

Figure 2

The training error diagram of integer-order artificial intelligence neural network with different number of hidden layer neurons

Training of Fractional Artificial Intelligence Network Based on Nonlinear Fractional Function

In this section, the node function of the neural network is chosen to be the Sigmoid function, because its output is close to the signal output form of biological neurons and it can therefore simulate their nonlinear characteristics. Moreover, the nonlinearity of the Sigmoid function also enhances the nonlinear mapping ability of the neural network [8].

The mathematical expression of the Sigmoid function is:

$$f(x) = \frac{1}{1 + e^{-x}}$$

The first derivative of the Sigmoid function is:

$$f'(x) = \frac{e^{-x}}{\left(1 + e^{-x}\right)^2} = \frac{1}{1 + e^{-x}} \cdot \frac{e^{-x}}{1 + e^{-x}} = f(x)\left[1 - f(x)\right]$$
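The identity $f'(x) = f(x)[1 - f(x)]$ can be checked numerically in a few lines; the sample points and finite-difference step below are arbitrary choices.

```python
# Quick numerical check of f'(x) = f(x) * (1 - f(x)) for the Sigmoid function.
import numpy as np

f = lambda x: 1.0 / (1.0 + np.exp(-x))
x = np.linspace(-5, 5, 11)
h = 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)     # central-difference derivative
analytic = f(x) * (1 - f(x))
print(np.max(np.abs(numeric - analytic)))     # ~1e-11, confirming the identity
```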

We would like the artificial intelligence neural network itself to change the derivative order according to the change of the convergence error and so achieve global self-adaptation, that is, to construct a self-adaptive artificial intelligence neural network. When the errors of consecutive iterations differ greatly, the fractional order takes a smaller value so that the network can keep learning at a relatively fast speed; and to prevent training saturation, that is, the error rising instead of falling, the adjustment of the fractional order between iterations is made slightly larger [9]. Considering further that the training state of the network can be changed by adjusting the two parameters α and β, the node functions of the hidden layer and the output layer can use the following function:

$$f(x) = \frac{1}{1 + 1/E_{\alpha,\beta}(x)}$$
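A minimal sketch of this node function follows. The two-parameter Mittag-Leffler function is evaluated here by truncating its power series $E_{\alpha,\beta}(x) = \sum_k x^k/\Gamma(\alpha k + \beta)$, which is adequate for moderate |x|; the truncation length and sample points are assumptions. With α = β = 1 the node function reduces exactly to the Sigmoid function.

```python
# Sketch of the Mittag-Leffler-based node function f(x) = 1 / (1 + 1/E_{alpha,beta}(x)).
import numpy as np
from scipy.special import gamma

def mittag_leffler(x, alpha, beta, terms=100):
    """Truncated power series of the two-parameter Mittag-Leffler function."""
    k = np.arange(terms)
    return np.sum(np.power.outer(x, k) / gamma(alpha * k + beta), axis=-1)

def node_function(x, alpha, beta):
    E = mittag_leffler(x, alpha, beta)
    return 1.0 / (1.0 + 1.0 / E)          # equivalently E / (1 + E)

x = np.linspace(-3, 3, 7)
for alpha, beta in [(1.0, 1.0), (0.8, 1.0), (2.0, 1.0)]:
    print(alpha, beta, np.round(node_function(x, alpha, beta), 3))
# With alpha = beta = 1, E_{1,1}(x) = e^x and the node function is exactly the Sigmoid.
```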

When β is fixed and α is less than 1, the output of the function changes faster; the larger α becomes, the more slowly the output rises and the flatter it becomes. When β equals 2 and α takes a value less than 1, the function graph is similar to that of the Sigmoid function, the value interval is similar, and the upper bound of the function within (0, 1) tends to 1.

When α is fixed at a value less than 1, the function graph is similar to the Sigmoid graph no matter what value β takes; only when β is small does the function take values less than 0, and otherwise the values lie within (0, 1). When α equals 2 and x is greater than 0, the function value changes little and tends to a smooth straight line. From the comparison of the curves in Figure 3 it can be seen that if α and β both take values greater than 1, the value of f(x) changes relatively smoothly, with little difference from point to point; if only one of α and β is greater than 1 and the other is less than 1, the function value changes greatly. When α takes a value less than 1 or β takes a value greater than 1, the function f(x) is similar to the Sigmoid function. In practical applications, the relationship between the parameters and their effect on the function values should be considered when adjusting the parameters [10].

Figure 3

Function diagram of f(x)

Taking the node function defined above as the node function of the fractional artificial neural network, we change the parameters α and β and observe the effect of the parameter changes on network training; the data in the table are the convergence errors of the final iteration.

When α and β are both less than 1, the error curves for different β differ little, the final convergence is not very good, and the convergence error remains relatively large. It can also be seen that the smaller the value of β, the faster the curve declines, that is, the faster the error between the actual output and the expected value changes. As before, the values of α and β should not be too large, otherwise the convergence of the network is poor and the purpose of training is not achieved.

Through the training of the fractional-order artificial intelligence neural network based on the Sigmoid function, the effects of the fractional order, the learning rate, and the number of hidden-layer neurons on training are summarized, and the training results of the fractional-order network are compared with those of the integer-order network to summarize the advantages and disadvantages of each. On this basis, a variable-order iterative algorithm is proposed, that is, a switching between integer order and fractional order in which the fractional order adapts according to the errors before and after each iteration. The actual training results show the advantages of this algorithm. A fractional-order artificial intelligence network based on this node function is then constructed and trained, the effect of the two parameters of the function on network training is summarized, and a simple comparison is made with the network based on the Sigmoid function.
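The order-switching idea can be illustrated with a heavily simplified sketch on a toy one-dimensional loss. The toy loss, the adjustment thresholds, and the Caputo-style scaling used for the fractional step are all assumptions made only for illustration; they are not the exact update rule used in the experiments.

```python
# Heavily simplified sketch: switch between an integer-order and a fractional-style step,
# adjusting the order from the relative change of the error between consecutive iterations.
from math import gamma

loss = lambda w: 0.5 * (w - 3.0) ** 2          # assumed toy quadratic loss, minimum at w = 3
grad = lambda w: w - 3.0

w, w0 = 10.0, 0.0                               # current weight and expansion point of the fractional step
u, alpha = 0.3, 1.0                             # learning rate and current derivative order
prev_E = loss(w)
for it in range(60):
    E = loss(w)
    if E > prev_E:
        alpha = max(0.3, alpha - 0.2)           # error rose (near saturation): adjust the order strongly
    elif abs(E - prev_E) > 0.05 * prev_E:
        alpha = max(0.3, alpha - 0.05)          # errors quite different: take a smaller order
    else:
        alpha = min(1.0, alpha + 0.05)          # error settling: drift back toward the integer order
    prev_E = E
    if alpha >= 1.0:
        w -= u * grad(w)                                                   # integer-order step
    else:
        # assumed Caputo-style scaling of the gradient, used only to make the sketch runnable
        w -= u * grad(w) * abs(w - w0) ** (1 - alpha) / gamma(2 - alpha)
print("final weight:", round(w, 4), "final order:", round(alpha, 2))
```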

Conclusion

The author mainly studies the algorithm and application of a computer artificial intelligence neural network based on fractional calculus theory. Fractional-order theory is introduced into the computer artificial intelligence neural network algorithm: the fractional-order artificial network algorithm is derived from two definitions of the fractional derivative, the neural network is trained on a specific sample data set, and the results are compared with those of the integer-order neural network. The simulation results show that the fractional-order computer artificial intelligence neural network trains faster, but its convergence accuracy is slightly lower. Therefore, a variable-order iterative learning algorithm is proposed and applied to the training of the neural network; the results demonstrate the feasibility of this algorithm and its advantages in convergence speed and accuracy.

However, the variable-order iterative algorithm proposed here adjusts the fractional order adaptively according to the ratio of the errors before and after each iteration. Although the training effect on the artificial intelligence network is good, the rule is very simple, and for different sample sets some quantities in the algorithm may need to be modified. It is therefore necessary to find a better and more widely applicable order-adjustment algorithm.



References

Cao G. Research on the application of artificial intelligence algorithm in logistics distribution route optimization[J]. Paper Asia, 2018, 34(5):35–38.

Wang Y L, Tian D, Bao S H, et al. Using the iterative reproducing kernel method for solving a class of nonlinear fractional differential equations[J]. International Journal of Computer Mathematics, 2017, 94(9–12):1–19.

Wang Z, Fang B. Correction to: Application of combined kernel function artificial intelligence algorithm in mobile communication network security authentication mechanism[J]. The Journal of Supercomputing, 2019, 75(9):5965.

Wang Z, Peterson J L, Rea C, et al. Special Issue on Machine Learning, Data Science, and Artificial Intelligence in Plasma Research[J]. IEEE Transactions on Plasma Science, 2020, 48(1):1–2.

Varakantham P, An B, Low B, et al. Artificial Intelligence Research in Singapore: Assisting the Development of a Smart Nation[J]. AI Magazine, 2017, 38(3):102–105.

Wang S. Research on the application of artificial intelligence in sports meeting management system[J]. Revista de la Facultad de Ingenieria, 2017, 32(16):344–350.

Liu H, Yu L, Ruan C, et al. Tracking Air-to-Air Missile Using Proportional Navigation Model with Genetic Algorithm Particle Filter[J]. Mathematical Problems in Engineering, 2016, 2016(9):1–11.

Cong N D, Doan T S, Siegmund S, et al. On stable manifolds for fractional differential equations in high-dimensional spaces[J]. Nonlinear Dynamics, 2016, 86(3):1885–1894.

Ziane D, Cherif M H, Cattani C, et al. Yang-Laplace Decomposition Method for Nonlinear System of Local Fractional Partial Differential Equations[J]. Applied Mathematics and Nonlinear Sciences, 2019, 4(2):489–502.

Kaladhar K, Komuraiah E, Reddy K M. Soret and Dufour effects on chemically reacting mixed convection flow in an annulus with Navier slip and convective boundary conditions[J]. Applied Mathematics and Nonlinear Sciences, 2019, 4(2):475–488.

Zhang Q. Fully discrete convergence analysis of non-linear hyperbolic equations based on finite element analysis[J]. Applied Mathematics and Nonlinear Sciences, 2019, 4(2):433–444.
