
A new computational algorithm for the solution of second order initial value problems in ordinary differential equations


Introduction

Many phenomena that occur in the chemical, biological, engineering, physical, and social sciences can be modelled mathematically in the form of either ordinary or partial differential equations. However, it is difficult to obtain exact solutions for these differential equations by analytical means, especially when they are nonlinear, so we consider approximate solutions to these problems. There are numerous ways in which an approximate solution can be constructed, and the concept of approximation plays a very important role in numerical analysis. Thus, the approximate solution of practical problems modelled as differential equations is one of the main preoccupations of numerical analysis.

Consider second order initial value problems in ordinary differential equations of the form

$$y^{(2)}(x) = f(x, y, y'), \quad x \in [a, b] \subset \mathbb{R} \quad \text{and} \quad y(x),\ y'(x),\ f(x, y, y') \in \mathbb{R}, \tag{1.1}$$

subject to initial conditions

$$y(a) = \alpha \quad \text{and} \quad y'(a) = \beta. \tag{1.2}$$

In the literature, problems of the form (1.1) are conventionally solved by reducing the differential system to first order equations. Some eminent authors have contributed to this specific area of research [1,2,3,4,11]. Another approach to investigating the solution of such problems is the shooting method, either simple or multiple [8]. In recent years, researchers [5,6] have applied a nonstandard method and obtained results competitive with those of other methods. Much research on the numerical integration of initial value problems has thus been reported in the literature, a good deal of it excellent work, but the case for developing a new algorithm to solve equation (1.1) cannot be overemphasized.

In this article, we develop a new single-step algorithm capable of solving equations of the form (1.1). A similar algorithm was first reported in [7] for the study of first order initial value problems. Having seen the performance of that algorithm on first order initial value problems, we are motivated to investigate what happens when a similar idea is used to derive an algorithm for the solution of second order initial value problems.

The existence and uniqueness of the solution to the initial value problem (1.1) is assumed. Further, we assume that problem (1.1) is well posed with continuous derivatives and that the solution depends continuously on the initial conditions. The specific assumptions on f(x, y, y') that ensure existence and uniqueness will not be considered here [8,9,10].

This paper is divided into five sections. Section 2 deals with the derivation and development of the algorithm, while the truncation error and convergence of the algorithm are treated in Section 3. The stability of the algorithm is discussed in Section 4, and numerical experiments on four model problems are presented in Section 5.

Development of Algorithm

We discretize the interval [a, b], on which the solution of problem (1.1) is desired, using a finite number N of steps with the nodal points

$$x_j = a + jh, \qquad j = 0, 1, 2, \ldots, N, \tag{2.2}$$

where the step length is $h = \dfrac{b - a}{N}$, so that $x_N = b$.

Suppose we are to determine numbers $y_j$ that numerically approximate the values of the theoretical solution $y(x)$ of problem (1.1) at the nodal points $x_j$, $j = 1, 2, \ldots, N$; similar notation such as $f_j$ is defined to mean $f(x_j, y_j, y'_j)$. Following the ideas in [1,6], we make the local assumption that the theoretical solution $y(x)$ of the initial value problem (1.1) can be represented on the interval $[x_j, x_{j+1}]$ by the interpolating function

$$F(x) = a_0 + a_1 x + a_2 x^2 + a_3 e^{x^3}, \tag{2.3}$$

where $a_0$, $a_1$, $a_2$, and $a_3$ are undetermined coefficients.

To determine these coefficients, we impose the following conditions.

The interpolating function and its first derivative with respect to x must coincide with the theoretical solution y(x) of problem (1.1) and its derivative y′(x) at x = x_j and x = x_{j+1}, i.e.

$$\begin{aligned} F(x_j) &= y(x_j) \quad \text{and} \quad F(x_{j+1}) = y(x_{j+1}), \\ F'(x_j) &= y'(x_j) \quad \text{and} \quad F'(x_{j+1}) = y'(x_{j+1}). \end{aligned} \tag{2.4}$$

The second and third derivatives with respect to x of the interpolating function coincide, respectively, with f(x, y, y′) and the derivative of f(x, y, y′) with respect to x at x = x_j, i.e.

$$F^{(2)}(x_j) = f_j \quad \text{and} \quad F^{(3)}(x_j) = f'_j. \tag{2.5}$$

Thus, from conditions (2.4) and (2.5), we get

$$\begin{aligned} a_0 + a_1 x_j + a_2 x_j^2 + a_3 e^{x_j^3} &= \alpha, \\ a_1 + 2 a_2 x_j + 3 a_3 x_j^2 e^{x_j^3} &= \beta, \\ 2 a_2 + 3 a_3 x_j (2 + 3 x_j^3) e^{x_j^3} &= f_j, \\ 3 a_3 (2 + 18 x_j^3 + 9 x_j^6) e^{x_j^3} &= f'_j. \end{aligned} \tag{2.6}$$

Solving the system of equations (2.6) for $a_0, a_1, \ldots$, we obtain

$$\begin{aligned} a_1 &= \beta - x_j f_j + \frac{(x_j^2 + 3 x_j^5) f'_j}{2 + 18 x_j^3 + 9 x_j^6}, \\ a_2 &= \frac{1}{2} \left( f_j - \frac{(2 x_j + 3 x_j^4) f'_j}{2 + 18 x_j^3 + 9 x_j^6} \right), \\ a_3 &= \frac{f'_j\, e^{-x_j^3}}{3 \left( 2 + 18 x_j^3 + 9 x_j^6 \right)}. \end{aligned} \tag{2.7}$$
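As a consistency check, the linear system (2.6) can also be solved symbolically. The following sketch (using sympy, which is our assumption and not part of the paper; the symbol x stands for the node $x_j$) reproduces the coefficients (2.7):

```python
import sympy as sp

# x stands for the node x_j; fj and fpj stand for f_j and f'_j.
x, alpha, beta, fj, fpj = sp.symbols("x alpha beta f_j fp_j")
a0, a1, a2, a3 = sp.symbols("a0 a1 a2 a3")

# Interpolating function (2.3) and the conditions of (2.6) at x = x_j.
F = a0 + a1 * x + a2 * x**2 + a3 * sp.exp(x**3)
eqs = [
    sp.Eq(F, alpha),            # F(x_j)    = alpha
    sp.Eq(F.diff(x), beta),     # F'(x_j)   = beta
    sp.Eq(F.diff(x, 2), fj),    # F''(x_j)  = f_j
    sp.Eq(F.diff(x, 3), fpj),   # F'''(x_j) = f'_j
]
sol = sp.solve(eqs, [a0, a1, a2, a3], dict=True)[0]
for c in (a1, a2, a3):
    print(c, "=", sp.simplify(sol[c]))
```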

From (2.3) and (2.4) we have

$$\begin{aligned} y_{j+1} - y_j &= a_1 (x_{j+1} - x_j) + a_2 (x_{j+1}^2 - x_j^2) + a_3 \left( e^{x_{j+1}^3} - e^{x_j^3} \right), \\ y'_{j+1} - y'_j &= 2 a_2 (x_{j+1} - x_j) + 3 a_3 \left( x_{j+1}^2 e^{x_{j+1}^3} - x_j^2 e^{x_j^3} \right). \end{aligned} \tag{2.8}$$

Using (2.2) and substituting the values of $a_1$, $a_2$, and $a_3$ from (2.7) into (2.8), we have

$$\begin{aligned} y_{j+1} &= y_j + h \left( \beta - x_j f_j + \frac{(x_j^2 + 3 x_j^5) f'_j}{2 + 18 x_j^3 + 9 x_j^6} \right) + \frac{h (2 x_j + h)}{2} \left( f_j - \frac{(2 x_j + 3 x_j^4) f'_j}{2 + 18 x_j^3 + 9 x_j^6} \right) + \frac{\left( e^{3 x_j h (x_j + h) + h^3} - 1 \right) f'_j}{3 \left( 2 + 18 x_j^3 + 9 x_j^6 \right)}, \\ y'_{j+1} &= y'_j + h f_j - \frac{h (2 x_j + 3 x_j^4) f'_j}{2 + 18 x_j^3 + 9 x_j^6} + \frac{\left( (x_j + h)^2 e^{3 x_j h (x_j + h) + h^3} - x_j^2 \right) f'_j}{2 + 18 x_j^3 + 9 x_j^6}. \end{aligned} \tag{2.9}$$

We replace $f'_j$ in (2.9) by its first order difference approximation, i.e.

$$f'_j = \frac{f_{j+1} - f_j}{h}.$$

In this way we obtain a single-step implicit algorithm of the form

$$\begin{aligned} y_{j+1} &= y_j + h\, \phi(h, f_j, f'_j), \\ y'_{j+1} &= y'_j + h\, \varphi(h, f_j, f'_j), \end{aligned} \tag{2.10}$$

where $\phi$ and $\varphi$ are increment functions depending on $h$, $f_j$, and $f'_j$. If the system of equations (2.10) is linear, it is generally solved by an iterative method; otherwise, the Newton-Raphson method is used. After applying the algorithm, the values $y_{j+1}$ and $y'_{j+1}$ given by (2.9) are taken as approximations to the exact solution of problem (1.1) and its derivative at $x_{j+1}$. Repeating the procedure along the nodes of the interval of integration, we obtain a discrete solution and a discrete derivative of the solution. In the numerical section we examine the performance of the proposed algorithm on a variety of second order initial value problems.
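To make the marching procedure concrete, here is a minimal Python sketch of (2.9)-(2.10), assuming the implicit dependence on $f_{j+1}$ is resolved by simple fixed-point iteration (the paper leaves the choice between an iterative method and Newton-Raphson open). The function name solve_ivp2, the Euler predictor, and the tolerance are illustrative assumptions, not part of the original algorithm.

```python
import math

def solve_ivp2(f, a, b, alpha, beta, N, tol=1e-12, max_iter=50):
    """Sketch of the single-step implicit algorithm (2.9)-(2.10).

    f(x, y, yp) is the right-hand side of y'' = f(x, y, y'); alpha and
    beta are the initial values y(a) and y'(a).  At a general step j,
    beta in (2.7)/(2.9) plays the role of the current derivative y'_j.
    """
    h = (b - a) / N
    xs = [a + j * h for j in range(N + 1)]          # nodal points (2.2)
    ys, yps = [alpha], [beta]
    for j in range(N):
        x, y, yp = xs[j], ys[j], yps[j]
        fj = f(x, y, yp)
        D = 2.0 + 18.0 * x**3 + 9.0 * x**6          # common denominator in (2.7)
        E = math.exp(3.0 * x * h * (x + h) + h**3)  # e^{x_{j+1}^3 - x_j^3}
        # The scheme is implicit: f'_j = (f_{j+1} - f_j)/h involves the
        # unknowns y_{j+1}, y'_{j+1}.  We iterate from an Euler predictor.
        y_new, yp_new = y + h * yp, yp + h * fj
        for _ in range(max_iter):
            fpj = (f(xs[j + 1], y_new, yp_new) - fj) / h
            y_next = (y
                      + h * (yp - x * fj + (x**2 + 3.0 * x**5) * fpj / D)
                      + 0.5 * h * (2.0 * x + h) * (fj - (2.0 * x + 3.0 * x**4) * fpj / D)
                      + (E - 1.0) * fpj / (3.0 * D))
            yp_next = (yp + h * fj
                       - h * (2.0 * x + 3.0 * x**4) * fpj / D
                       + ((x + h)**2 * E - x**2) * fpj / D)
            if abs(y_next - y_new) + abs(yp_next - yp_new) < tol:
                y_new, yp_new = y_next, yp_next
                break
            y_new, yp_new = y_next, yp_next
        ys.append(y_new)
        yps.append(yp_new)
    return xs, ys, yps
```

A Newton-Raphson iteration, as the paper suggests for the nonlinear case, could replace the fixed-point loop.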

The local truncation error and convergence

In this section, we consider the error associated with the proposed algorithm (2.9). The local truncation error $T_{n+1}$ is defined as in [13]:

$$T_{n+1} = y(x_n + h) - y_{n+1}. \tag{3.11}$$

Substituting the value of $y_{n+1}$ from (2.9) into (3.11) and expanding $y(x_n + h)$ in a Taylor series about the point $x_n$, we have

$$\begin{aligned} T_{n+1} &= \frac{h (x_n^2 + 3 x_n^5)\left( y_n^{(3)} - 1 \right)}{2 + 18 x_n^3 + 9 x_n^6} + O(h^2) \\ &= \frac{h}{2} (x_n^2 + 3 x_n^5)\left( y_n^{(3)} - 1 \right)\left( 1 + 9 x_n^3 + \tfrac{9}{2} x_n^6 \right)^{-1} + O(h^2) \\ &\lt \frac{h}{2} \left( y_n^{(3)} - 1 \right) x_n^2 + O(h^2), \\ |T_{n+1}| &\lt \frac{h b^2}{2} \left| y_n^{(3)} \right|, \end{aligned} \tag{3.12}$$

where $b = \max x_n$ on $[a, b]$. Thus the local truncation error $T_{n+1}$ is bounded. Since we know $x_0$ and $y(x_0)$ exactly, using algorithm (2.9) we can compute $y_{n+1}$, $n = 0, 1, 2, 3, \ldots, N$, with maximum error $\frac{h b^2}{2} \left| y^{(3)}_n \right|$. The error arising from the initial conditions tends to zero as $h \to 0$, i.e. for large $N$. Similarly, we can bound the maximum error in the second formula of (2.9), which computes the derivative of the solution. We therefore conclude that the method (2.9) is convergent for large $N$.

Stability property

To discuss the stability of algorithm (2.9), we follow the same approach as in [12,13]. Consider the Dahlquist test equation for stability,

$$y''(x) = \lambda^2 y(x), \quad x \in [a, b] \quad \text{and} \quad \lambda \in \mathbb{R}, \tag{4.13}$$

subject to the initial conditions $y(x_0) = y_0$, $y'(x_0) = \lambda y_0$. Applying method (2.9) to this test equation and assuming a negligible contribution from the terms of $O(h^2)$ and $O(x_n^2)$ in the resulting expression, we obtain the finite difference equation

$$y_{n+1} = (1 + h\lambda)\, y_n \equiv E(h\lambda)\, y_n, \qquad n = 0, 1, 2, \ldots, N,$$

where the stability function $E(h\lambda) = 1 + h\lambda$ is an approximation to $e^{h\lambda}$. For the algorithm (2.9) to be absolutely stable, we require

$$|E(h\lambda)| \lt 1. \tag{4.14}$$

Solving inequality (4.14) with $E(h\lambda) = 1 + h\lambda$, i.e. $|1 + h\lambda| \lt 1$, gives $-2 \lt h\lambda \lt 0$; thus the corresponding interval of absolute stability of (2.9) is $(-2, 0)$.

Numerical experiments

In this section, four numerical examples, linear and nonlinear, are considered to illustrate our algorithm (2.9) and to demonstrate computationally its efficiency and accuracy. In the tables we show the maximum absolute error, computed over the nodal points of the interval of integration, in both the solution and the derivative of the solution. Let $y_i$ and $y'_i$ be the numbers calculated by (2.9), which approximate the theoretical solution $y(x)$ and its derivative $y'(x)$ at the point $x = x_i$. The maximum absolute error in the solution and in its derivative is calculated by

$$\begin{aligned} MAE(y) &= \max_i |y(x_i) - y_i|, \\ MAE(y') &= \max_i |y'(x_i) - y'_i|, \qquad i = 1, 2, \ldots, N. \end{aligned}$$

All computations in the examples considered were performed in the GNU FORTRAN environment (gcc 2.95 compiler) running on a Microsoft Windows 2000 Professional operating system.
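Since the original experiments were run in FORTRAN, the following is only an assumed reconstruction: it applies the solve_ivp2 sketch from Section 2 to Example 5.2 below and evaluates the maximum absolute errors as defined above. The initial conditions $y(0) = 1$, $y'(0) = -2$ are read off the stated exact solution.

```python
# Example 5.2: y'' = 6 y^2 on [0, 1], exact solution y(x) = (1 + x)^(-2);
# the initial conditions y(0) = 1, y'(0) = -2 follow from the exact solution.
xs, ys, yps = solve_ivp2(lambda x, y, yp: 6.0 * y**2, 0.0, 1.0, 1.0, -2.0, 128)

exact  = lambda x: (1.0 + x) ** -2          # exact solution
exactp = lambda x: -2.0 * (1.0 + x) ** -3   # its derivative

mae_y  = max(abs(exact(x) - y) for x, y in zip(xs[1:], ys[1:]))
mae_yp = max(abs(exactp(x) - yp) for x, yp in zip(xs[1:], yps[1:]))
print(f"MAE(y)  = {mae_y:.7e}")
print(f"MAE(y') = {mae_yp:.7e}")
```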

Example 5.1

Consider the initial value problem,

$$y''(x) = \left( \frac{-40}{x^3} + \frac{400}{x^4} \right) y(x).$$

The exact solution on [1, 2] is $y(x) = e^{-20/x}$. The maximum absolute errors in $y(x)$ and $y'(x)$ are given in Table 1.

Table 1. Maximum absolute error in $y(x) = e^{-20/x}$ and $y'(x)$ for Example 5.1. An entry $m(-k)$ denotes $m \times 10^{-k}$.

| N    | MAE(y)       | MAE(y')      |
|------|--------------|--------------|
| 64   | .5621729(-3) | .5267575(-6) |
| 128  | .2748708(-3) | .1289518(-6) |
| 256  | .1313720(-3) | .3187597(-7) |
| 512  | .5965512(-4) | .7879862(-8) |
| 1024 | .2474944(-4) | .1913576(-8) |
| 2048 | .1009797(-4) | .5345364(-9) |
| 4096 | .4115111(-5) | .1852914(-9) |

Example 5.2

Consider the nonlinear initial value problem

$$y''(x) = 6 y^2(x).$$

The exact solution on [0, 1] is $y(x) = (1 + x)^{-2}$. The maximum absolute errors in $y(x)$ and $y'(x)$ are given in Table 2.

Table 2. Maximum absolute error in $y(x) = (1 + x)^{-2}$ and $y'(x)$ for Example 5.2.

| N    | MAE(y)       | MAE(y')      |
|------|--------------|--------------|
| 128  | .9100055(-2) | .1660321(-1) |
| 256  | .5043587(-2) | .8981451(-2) |
| 512  | .2747672(-2) | .4802107(-2) |
| 1024 | .1475068(-2) | .2541303(-2) |
| 2048 | .7822510(-3) | .1330271(-2) |
| 4096 | .4106779(-3) | .6887614(-3) |
| 8192 | .2136840(-3) | .3566145(-3) |

Example 5.3

Consider the nonlinear initial value problem

$$y''(x) = y^3(x) - y(x)\, y'(x).$$

The exact solution on [1, 2] is $y(x) = (1 + x)^{-1}$. The maximum absolute errors in $y(x)$ and $y'(x)$ are given in Table 3.

Table 3. Maximum absolute error in $y(x) = (1 + x)^{-1}$ and $y'(x)$ for Example 5.3.

| N    | MAE(y)       | MAE(y')      |
|------|--------------|--------------|
| 128  | .1363998(-1) | .6387859(-2) |
| 256  | .7091403(-2) | .3583610(-2) |
| 512  | .3645122(-2) | .1952946(-2) |
| 1024 | .1860261(-2) | .1042857(-2) |
| 2048 | .9453296(-3) | .5497336(-3) |
| 4096 | .4789233(-3) | .2868771(-3) |
| 8192 | .2411603(-3) | .1467168(-3) |

Example 5.4

Consider the nonlinear initial value problem

$$y''(x) = y^2(x) + \frac{1}{2} \cos(x) - \sin^4\!\left( \frac{x}{2} \right).$$

The exact solution on [0, 1] is $y(x) = \sin^2(x/2)$. The maximum absolute errors in $y(x)$ and $y'(x)$ are given in Table 4.

Table 4. Maximum absolute error in $y(x) = \sin^2(x/2)$ and $y'(x)$ for Example 5.4.

| N    | MAE(y)       | MAE(y')      |
|------|--------------|--------------|
| 128  | .3083482(-1) | .2283364(-2) |
| 256  | .2080867(-1) | .1416236(-2) |
| 512  | .1396590(-1) | .8000433(-3) |
| 1024 | .9322166(-2) | .4311204(-3) |
| 2048 | .6191766(-2) | .2279877(-3) |
| 4096 | .4095947(-2) | .1192688(-3) |
| 8192 | .2701349(-2) | .6163130(-4) |

Conclusion

In this paper, we have described a new method that is efficient, stable, and convergent for solving second order initial value problems in ordinary differential equations. The implementation of the method is simple. The results obtained for the examples show that the method is computationally efficient and accurate. Our future work will deal with extending the present method to higher order boundary value problems and improving its order of accuracy.
