This work is licensed under the Creative Commons Attribution 4.0 International License.
Introduction
The real-world problems that arise in mathematical modelling usually result in functional equations, such as partial differential equations, integral and integro-differential equations (IDEs), stochastic equations, and others. In particular, integral equations arise in fluid mechanics, chemical kinetics, biological modelling, solid-state physics, and many other domains. Such integral equations are often difficult to solve analytically, so numerical methods are introduced, such as finite difference methods, finite element methods, boundary element methods, etc. There are advantages and disadvantages to each of these numerical techniques, and the quest for more versatile, user-friendly, precise, and less complicated numerical techniques is a never-ending endeavor. Integral equations can be solved using a variety of techniques, including Picard’s method, the Laplace transformation method, the Adomian decomposition technique, successive substitutions [1], and methods from soliton theory [2,3,4,5]. Some recently trending wavelet methods are available in the literature concerning numerical solutions of integral and integro-differential equations, such as the wavelet Galerkin method [6], Haar wavelet [7,8,9,10], Legendre wavelet [11], Hermite wavelet [12], Bernoulli wavelet [13, 14], wavelet full-approximation scheme [15], Daubechies wavelet new transform method [16], and biorthogonal spline wavelet full-approximation transform method [17].
In this paper, we consider the nonlinear Fredholm integral equation of the form,
y(x)=f(x)+\int_{0}^{1}{}k(x,t){{[y(t)]}^{m}}dt,\quad 0\le x,t\le 1.
Let y(x) be the unique solution of equation (1), which is to be determined, where [y(t)]m, m > 1, is the nonlinear term; f (x) and the kernel k(x,t) are assumed to be in L2(R) over the interval 0 ≤ x,t ≤ 1.
In the field of graph theory, graph theoretical polynomial-based numerical methods have recently been used in numerical analysis. The Hosoya polynomial is one of the most famous in graph theory. A graph polynomial is a graph invariant with polynomial values in algebraic graph theory. Several graph theoretic polynomials are found in the literature [18,19,20,21,22,23,24,25,26,27,28,29,30,31,32]. Among the polynomials utilized in numerical approximation are the Hosoya polynomials. The first paper [33] was published on the numerical solution of the linear Fredholm integral equation using the Hosoya polynomial, whereas [34] is a comparative study of the numerical solution of the Fredholm integral equation using the Haar wavelet and the Hosoya polynomial, and the method has also been applied to delay differential equations [35]. In this paper, we apply the Hosoya polynomial method (HPM) to the numerical solution of nonlinear Fredholm integral equations; the method reduces the problem to a system of algebraic equations, which we solve using MATLAB tools to obtain the required approximate solution. Illustrative applications demonstrate the efficiency of the Hosoya polynomial method; the results compare favourably with the corresponding exact solutions, and the absolute errors, shown in tables and figures, indicate a more accurate solution than the existing methods.
The paper is structured as follows: In Section 2, some basic definitions and properties of the Hosoya polynomial of graphs and function approximation are included. Section 3 is devoted to the matrices of the Hosoya polynomial and the kernel. Section 4 presents the convergence and error analysis. Section 5 describes the Hosoya polynomial method. In Section 6, we solve some illustrative applications and demonstrate the numerical results with high accuracy and efficiency using HPM. Section 7 provides the proposed method’s conclusion.
Hosoya polynomial and function approximation
In this work, we consider a simple graph G, and let V be a nonempty finite set of n vertices. Let X be a set of m unordered pairs of distinct vertices of V. Thus every pair x = (u,v) in X is an edge joining the vertices u and v, which are adjacent to each other. Let v1,v2,⋯, vn be the vertices of the graph G. Let Pn be a path graph on n vertices in which vi and vi+1, i = 1,2,⋯, n−1, are adjacent. The number of edges in the path Pn is the length of Pn. In a connected graph G [18], each pair of vertices is connected by some path. The distance between vertices vi and vj in G is the length of the shortest path joining them, denoted d(vi,vj). Harold Wiener [19] introduced the Wiener index of a graph G as the sum of the distances over all such pairs of G. Let WI(G) denote the Wiener index of a connected graph G, that is,
WI(G)=\sum\limits_{1\le i < j \le n}{}d({{v}_{i}},{{v}_{j}}).
In 1988, the study of the Hosoya polynomial [20] provided necessary content about distance-based graph invariants and also pointed out that the Hosoya polynomial connection with the Wiener index is elementary. Here H(G,λ) denotes Hosoya polynomial and is defined as,
H(G,\lambda )=\sum\limits_{k\ge 0}{}d(G,k){{\lambda }^{k}},
where λ is the parameter, and d(G,k) is the number of pairs of vertices of graph G that are at distance k. The relation between the Wiener index W I(G) and the Hosoya polynomial H(G,λ), is reported in [20, 21]:
WI(G)={H}'(G,1),
where H′(G,λ) indicates the first derivative of H(G,λ). Many authors have studied these concepts, such as the Hosoya polynomial of trees [22, 23], tori [24], zigzag polyhex nanotori [25], zig-zag open-ended nanotubes [26], benzenoid graphs [27, 28], cluster graphs [29], Fibonacci and Lucas cubes [30], composite graphs [31], armchair open-ended nanotubes [32], and so on [36,37,38,39,40].
The Hosoya polynomial of a path Pn is given as
H({{P}_{n}},\lambda )=n+(n-1)\lambda +(n-2){{\lambda }^{2}}+\cdots +[n-(n-2)]{{\lambda }^{n-2}}+[n-(n-1)]{{\lambda }^{n-1}}.
Particularly,
H({{P}_{4}},\lambda )={{\lambda }^{3}}+2{{\lambda }^{2}}+3\lambda +4,\quad H({{P}_{3}},\lambda )={{\lambda }^{2}}+2\lambda +3,\quad H({{P}_{2}},\lambda )=\lambda +2.
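These formulas can be checked with a short script. The sketch below (ours, not part of the original derivation; function names such as `hosoya_path_coeffs` are our own) builds the coefficient list d(Pn,k) = n − k of H(Pn,λ) and verifies the relation WI(G) = H′(G,1) for small paths.

```python
# Hosoya polynomial of the path P_n: there are d(P_n, k) = n - k pairs of
# vertices at distance k, for k = 0, 1, ..., n-1.
def hosoya_path_coeffs(n):
    """Coefficients [d(P_n,0), d(P_n,1), ..., d(P_n,n-1)] of H(P_n, lam)."""
    return [n - k for k in range(n)]

def hosoya_eval(coeffs, lam):
    """Evaluate H(G, lam) = sum_k d(G,k) * lam**k."""
    return sum(c * lam**k for k, c in enumerate(coeffs))

def wiener_index_path(n):
    """Wiener index of P_n: sum of |i - j| over all vertex pairs."""
    return sum(abs(i - j) for i in range(n) for j in range(i + 1, n))

def hosoya_derivative_at_one(coeffs):
    """H'(G, 1) = sum_k k * d(G,k)."""
    return sum(k * c for k, c in enumerate(coeffs))

# H(P_4, lam) = lam^3 + 2 lam^2 + 3 lam + 4, i.e. coefficients 4, 3, 2, 1
assert hosoya_path_coeffs(4) == [4, 3, 2, 1]
# WI(P_n) = H'(P_n, 1) for several path sizes
for n in range(2, 8):
    assert wiener_index_path(n) == hosoya_derivative_at_one(hosoya_path_coeffs(n))
```

The assertions pass, confirming that the coefficient pattern above and the Wiener-index relation agree for paths.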
Function approximation
A function f (x) ∈ L2[0,1] is expanded as:
f(x)=\sum\limits_{i=1}^{n}{}{{a}_{i}}H({{P}_{i}},x)={{A}^{T}}{{H}_{P}}(x),
where HP(x) and A are n × 1 matrices given by:
A={{[{{a}_{1}},{{a}_{2}},\cdots ,{{a}_{n}}]}^{T}},
and
{{H}_{P}}(x)={{[H({{P}_{1}},x),H({{P}_{2}},x),\cdots ,H({{P}_{n}},x)]}^{T}}.
Hosoya polynomial and Kernel matrix
We can generate the Hosoya polynomial matrix using the collocation points as follows,
{{H}_{n}}(x)=\left\{ \begin{align}& {{H}_{1}}({{x}_{i}})=1 \\ & {{H}_{2}}({{x}_{i}})={{x}_{i}}+2 \\ & {{H}_{3}}({{x}_{i}})=x_{i}^{2}+2{{x}_{i}}+3 \\ & \vdots \\ & {{H}_{n}}({{x}_{i}})=n+(n-1){{x}_{i}}+(n-2)x_{i}^{2}+\cdots +[n-(n-2)]x_{i}^{n-2}+[n-(n-1)]x_{i}^{n-1} \\ \end{align} \right.
where
{{x}_{i}}=\frac{i-0.5}{n},\ i=1,2,\cdots ,n. For example, for n = 2,
{{H}_{2}}(x)=\left\{ \begin{align}& {{H}_{1}}({{x}_{i}})=1, \\ & {{H}_{2}}({{x}_{i}})={{x}_{i}}+2. \\ \end{align} \right.
The matrix is of the form
{{H}_{2\times 2}}=\left[ \begin{matrix} 1 & 1 \\ 2.25 & 2.75 \\\end{matrix} \right]
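The collocation matrix can be reproduced with a few lines of code; the sketch below (our illustration, with row i holding H(Pi,·) evaluated at the n collocation points, and `hosoya_matrix` a name of our choosing) recovers the 2 × 2 matrix above.

```python
import numpy as np

def hosoya_basis(i, x):
    """H(P_i, x) = i + (i-1)x + (i-2)x^2 + ... + 1*x^(i-1)."""
    return sum((i - k) * x**k for k in range(i))

def hosoya_matrix(n):
    """n x n matrix with entry (i, j) = H(P_i, x_j), x_j = (j - 0.5)/n."""
    x = (np.arange(1, n + 1) - 0.5) / n      # collocation points
    return np.array([[hosoya_basis(i, xj) for xj in x]
                     for i in range(1, n + 1)])

# Reproduces the 2x2 matrix in the text: rows H_1 = 1 and H_2 = x + 2
# evaluated at x_1 = 0.25, x_2 = 0.75.
H2 = hosoya_matrix(2)
assert np.allclose(H2, [[1.0, 1.0], [2.25, 2.75]])
```

The same routine with n = 4 produces the 4 × 4 matrix shown next (up to the rounding used in the text).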
Similarly, for n = 4
{{H}_{n}}(x)=\left\{ \begin{align}& {{H}_{1}}({{x}_{i}})=1 \\ & {{H}_{2}}({{x}_{i}})={{x}_{i}}+2 \\ & {{H}_{3}}({{x}_{i}})=x_{i}^{2}+2{{x}_{i}}+3 \\ & {{H}_{4}}({{x}_{i}})=x_{i}^{3}+2x_{i}^{2}+3{{x}_{i}}+4. \\ \end{align} \right.
The matrix is of the form
{{H}_{4\times 4}}=\left[ \begin{matrix} 1 & 1 & 1 & 1 \\ 2.12 & 2.37 & 2.62 & 2.87 \\ 3.26 & 3.89 & 4.64 & 5.51 \\ 4.40 & 5.55 & 6.90 & 8.81 \\\end{matrix} \right]
The kernel matrix is the square matrix obtained by evaluating the kernel function k with Hosoya polynomial HP(x). As the number of samples n tends to infinity, certain properties of the kernel matrix show a convergent behavior.
The given kernel function is,
K=\int_{0}^{1}{}k(x,t){{[y(t)]}^{m}}dt,\quad 0\le x,t\le 1.
Let us put m = 2 and n = 6; then, by applying the collocation points,
{{K}_{i}}=\int_{0}^{1}{}k({{x}_{i}},t){{[y(t)]}^{2}}dt,\quad {{x}_{i}}=\frac{i-0.5}{6},\ i=1,2,\cdots ,6.
Then substitute the approximated truncated series for y(x), that is,
\begin{matrix} y(x)={{A}^{T}}{{H}_{P}}(x), \\ {{K}_{i}}=\int_{0}^{1}{}k({{x}_{i}},t){{[{{A}^{T}}{{H}_{P}}(t)]}^{2}}dt. \\ \end{matrix}
This reduces to a 6 × 6 matrix form as
\begin{matrix} {{K}_{i}}={{({{A}^{T}})}^{2}}\left[ \int_{0}^{1}{}k({{x}_{i}},t){{[{{H}_{P}}(t)]}^{2}}dt \right], \\ {{[K]}_{6\times 6}}=[A]_{1\times 6}^{2}\left[ {{(k)}_{6\times 6}}(H)_{6\times 6}^{2} \right]. \\ \end{matrix}
In general, we obtain an n × n matrix as,
{{[K]}_{n\times n}}=[A]_{1\times n}^{m}\left[ {{(k)}_{n\times n}}(H)_{n\times n}^{m} \right],
that is, the given kernel function is reduced to the form of a kernel matrix of size n × n.
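In practice the entries Ki = ∫₀¹ k(xi,t)[ATHP(t)]m dt are evaluated numerically. The sketch below (ours; the paper does not fix a quadrature rule, so Gauss–Legendre is our choice, and `kernel_column` is a hypothetical name) computes one such entry for the sample kernel k(x,t) = x²t + xt² with given coefficients A.

```python
import numpy as np

def hosoya_basis(i, x):
    """H(P_i, x) = i + (i-1)x + ... + 1*x^(i-1)."""
    return sum((i - k) * x**k for k in range(i))

def kernel_column(a, kern, xi, m, nodes=20):
    """K_i = int_0^1 k(x_i, t) [A^T H_P(t)]^m dt via Gauss-Legendre."""
    t, w = np.polynomial.legendre.leggauss(nodes)
    t, w = 0.5 * (t + 1.0), 0.5 * w          # map rule from [-1, 1] to [0, 1]
    y = sum(a[j] * hosoya_basis(j + 1, t) for j in range(len(a)))  # A^T H_P(t)
    return np.sum(w * kern(xi, t) * y**m)

# Example: k(x,t) = x^2 t + x t^2, A = (0, -2, 1) so A^T H_P(t) = t^2 - 1, m = 2.
# Analytically, int_0^1 t (t^2-1)^2 dt = 1/6 and int_0^1 t^2 (t^2-1)^2 dt = 8/105,
# so K_i = x_i^2 / 6 + 8 x_i / 105.
kern = lambda x, t: x**2 * t + x * t**2
a = np.array([0.0, -2.0, 1.0])
xi = 0.5
Ki = kernel_column(a, kern, xi, m=2)
assert np.isclose(Ki, xi**2 / 6 + xi * 8 / 105)
```

Since the integrand here is a polynomial of degree 6 in t, the 20-node Gauss rule evaluates the integral exactly up to rounding.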
Convergence and error analysis
In this section, we analyze the convergence and error of the present technique.
Theorem 1
Let y(x) ∈ L2(ℝ). Then the series solution \sum\nolimits_{i=1}^{\infty }{{a}_{i}}H({{P}_{i}},x) converges to y(x).
Proof
We know that L2(ℝ) is an infinite dimensional Hilbert space and {H(P1,x),H(P2,x),⋯} forms a basis of L2(ℝ), where H(Pi,x) are Hosoya polynomials of the path Pi. Now we shall show that the series
\sum\nolimits_{i=1}^{\infty }{{a}_{i}}H({{P}_{i}},x)
converges to y(x). We have ai = ⟨y(x),H(Pi,x)⟩, where ⟨·,·⟩ denotes the inner product on L2(ℝ).
Let ⟨Sn⟩ be the sequence of partial sums, given by
{{S}_{n}}=\sum\limits_{i=1}^{n}{}{{a}_{i}}H({{P}_{i}},x).
It can be shown that ⟨Sn⟩ is a Cauchy sequence in the complete space L2(ℝ), so it converges to some limit t. Since ai = ⟨y(x),H(Pi,x)⟩, the limit satisfies t = y(x), and hence
\sum\nolimits_{i=1}^{n}{}{{a}_{i}}H({{P}_{i}},x)\to y(x) as n → ∞. Hence the proof.
Theorem 2 gives the error estimation due to Hosoya polynomial expansion.
Theorem 2
Let y(x)\in H_{P}^{n}[0,1] and let {{A}^{T}}{{H}_{P}}(x) be the approximate solution obtained by using the Hosoya polynomial. Then the error bound is given by \left\| E(x) \right\|\le \left\| \frac{1}{n!\,{{2}^{2n-1}}}\underset{x\,\in \left[ 0,1 \right]}{\mathop{\max }}\,\left| {{y}^{\left( n \right)}}\left( x \right) \right| \right\|.
Proof
\begin{array}{*{35}{l}} {{\left\| E(x) \right\|}^{2}} & = & \int_{0}^{1}{}{{\left( y(x)-{{A}^{T}}{{H}_{P}}(x) \right)}^{2}}\ dx \\ {} & \le & \int_{0}^{1}{}{{\left( y(x)-{{P}_{n}}(x) \right)}^{2}}\ dx, \\ \end{array}
in which Pn(x) represents the interpolating polynomial of degree n that approximates y(x) over [0,1]. By making use of the maximum error estimate for the polynomial on [0,1], we have
\begin{array}{*{35}{l}} {{\left\| E(x) \right\|}^{2}} & \le & \int_{0}^{1}{}{{\left( \frac{2}{n!\,{{4}^{n}}}\underset{x\,\in \left[ 0,1 \right]}{\mathop{\max }}\,\left| {{y}^{\left( n \right)}}\left( x \right) \right| \right)}^{2}}dx \\ {} & = & {{\left\| \frac{1}{n!\,{{2}^{2n-1}}}\underset{x\,\in \left[ 0,1 \right]}{\mathop{\max }}\,\left| {{y}^{\left( n \right)}}\left( x \right) \right| \right\|}^{2}}, \\ \end{array}
where the maximum error bound for polynomial interpolation on [0,1] has been used; taking square roots yields the stated bound.
Description of the Hosoya polynomial method
Here, we consider the nonlinear Fredholm integral equation,
y(x)=f(x)+\int_{0}^{1}{}k(x,t){{[y(t)]}^{m}}dt,\quad 0\le x,t\le 1.
Using the above-defined equation (6), approximate y(x) by the truncated series, that is,
y(x)={{A}^{T}}{{H}_{P}}(x),
where HP(x) and A are defined in equation (5) and equation (4).
Substitute equation (7) in equation (6), which results as,
{{A}^{T}}{{H}_{P}}(x)=f(x)+\int_{0}^{1}{}k(x,t){{[{{A}^{T}}{{H}_{P}}(t)]}^{m}}dt.
Next, we substitute the collocation point
{{x}_{i}}=\frac{i-0.5}{n},i=1,2,\cdots ,n, in equation (8), to obtain,
\begin{matrix} {{A}^{T}}{{H}_{P}}({{x}_{i}})=f({{x}_{i}})+\int_{0}^{1}{}k({{x}_{i}},t){{[{{A}^{T}}{{H}_{P}}(t)]}^{m}}dt, \\ {{A}^{T}}{{H}_{P}}({{x}_{i}})=f({{x}_{i}})+{{({{A}^{T}})}^{m}}\left[ \int_{0}^{1}{}k({{x}_{i}},t){{[{{H}_{P}}(t)]}^{m}}dt \right]. \\ \end{matrix}
Thus, the nonlinear integral equation converts into the nonlinear system of algebraic equations with unknown coefficients as,
{{[A]}_{1\times n}}{{[H]}_{n\times n}}={{[f]}_{1\times n}}+[A]_{1\times n}^{m}\left[ {{(k)}_{n\times n}}(H)_{n\times n}^{m} \right].
On solving this system of nonlinear algebraic equations by Newton’s iterative method, we obtain the Hosoya coefficients A and then substitute these coefficients into equation (7). Hence, we finally obtain the desired approximate solution of equation (6).
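The whole procedure above can be sketched in a few dozen lines. The following is our minimal illustration, not the paper's code: the integrals are approximated by Gauss–Legendre quadrature and the algebraic system is solved by a Newton iteration with a finite-difference Jacobian (both implementation choices are ours), tested on the equation of Application 1 below, whose exact solution is y(x) = x² − 1.

```python
import numpy as np

def hosoya_basis(i, x):
    """H(P_i, x) = i + (i-1)x + ... + 1*x^(i-1)."""
    return sum((i - k) * x**k for k in range(i))

def hpm_solve(f, kern, m, n, newton_steps=50):
    """Collocation solution of y(x) = f(x) + int_0^1 k(x,t) y(t)^m dt."""
    x = (np.arange(1, n + 1) - 0.5) / n                # collocation points
    t, w = np.polynomial.legendre.leggauss(30)
    t, w = 0.5 * (t + 1.0), 0.5 * w                    # quadrature on [0, 1]
    Ht = np.array([hosoya_basis(i, t) for i in range(1, n + 1)])
    Hx = np.array([[hosoya_basis(i, xi) for i in range(1, n + 1)] for xi in x])

    def residual(a):
        y_t = a @ Ht                                   # y at quadrature nodes
        integ = np.array([np.sum(w * kern(xi, t) * y_t**m) for xi in x])
        return Hx @ a - f(x) - integ

    a = np.zeros(n)
    for _ in range(newton_steps):                      # Newton, FD Jacobian
        r = residual(a)
        if np.linalg.norm(r) < 1e-13:
            break
        J = np.empty((n, n))
        eps = 1e-7
        for j in range(n):
            da = np.zeros(n)
            da[j] = eps
            J[:, j] = (residual(a + da) - r) / eps
        a = a - np.linalg.solve(J, r)
    return a

# Application 1: f(x) = (5/6)x^2 - (8/105)x - 1, k(x,t) = x^2 t + x t^2, m = 2
f = lambda x: 5 / 6 * x**2 - 8 / 105 * x - 1
kern = lambda x, t: x**2 * t + x * t**2
a = hpm_solve(f, kern, m=2, n=3)
# Recovered Hosoya coefficients match a1 = 0, a2 = -2, a3 = 1
assert np.allclose(a, [0.0, -2.0, 1.0], atol=1e-6)
```

With these coefficients, y(x) = 0·H1 − 2·H2 + 1·H3 = x² − 1, reproducing the exact solution reported in Application 1.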
Numerical applications
To demonstrate the capability of this method, we consider a few illustrative applications from the literature and verify the accuracy and efficiency of the results:
\text{Error function}=\left\| {{y}_{e}}({{x}_{i}})-{{y}_{A}}({{x}_{i}}) \right\|=\sqrt{\sum\limits_{i=1}^{n}{}{{[{{y}_{e}}({{x}_{i}})-{{y}_{A}}({{x}_{i}})]}^{2}}},
where ye is the exact solution and yA is the approximate solution. The numerical results are compared with the exact solutions and with existing methods.
Application 1
Consider the nonlinear Fredholm integral equation [40],
y(x)=\frac{5}{6}{{x}^{2}}-\frac{8}{105}x-1+\int_{0}^{1}{}({{x}^{2}}t+x{{t}^{2}}){{y}^{2}}(t)dt,0\le x\le 1,
which has the exact solution y(x) = x2 − 1. Here we take n = 3; by the proposed technique, equation (11) reduces to a system of algebraic equations. Solving this system, we get the three unknown Hosoya coefficients as
{{a}_{1}}=0,{{a}_{2}}=-2,{{a}_{3}}=1,
substituting these coefficients in y(x) = AT HP(x) as,
y(x)={{a}_{1}}(1)+{{a}_{2}}(x+2)+{{a}_{3}}({{x}^{2}}+2x+3),
we obtain y(x) = x2 − 1, which coincides with the exact solution of equation (11). This demonstrates the stability, accuracy, and fast convergence of the present technique.
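As a quick independent check (ours, not part of the paper), one can confirm numerically that y(x) = x² − 1 satisfies equation (11); this follows from ∫₀¹ t(t²−1)² dt = 1/6 and ∫₀¹ t²(t²−1)² dt = 8/105.

```python
import numpy as np

# Verify that y(x) = x^2 - 1 satisfies
# y(x) = (5/6)x^2 - (8/105)x - 1 + int_0^1 (x^2 t + x t^2) y(t)^2 dt.
t, w = np.polynomial.legendre.leggauss(20)
t, w = 0.5 * (t + 1.0), 0.5 * w            # Gauss rule mapped to [0, 1]
y = lambda s: s**2 - 1
for x in np.linspace(0.0, 1.0, 11):
    rhs = (5 / 6 * x**2 - 8 / 105 * x - 1
           + np.sum(w * (x**2 * t + x * t**2) * y(t)**2))
    assert abs(rhs - y(x)) < 1e-12
```

The integrand is a polynomial of degree 6 in t, so the 20-node Gauss rule is exact and the residual is at rounding level.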
Application 2
Consider the nonlinear Fredholm integral equation [7],
y(x)=f(x)+\int_{0}^{1}{}xt{{[y(t)]}^{2}}dt,0\le x\le 1,
where
f(x)={{f}_{1}}(x)-\left( \frac{9}{128}-\frac{9}{32e}+\frac{7}{16{{e}^{2}}}+\frac{1}{16{{e}^{4}}} \right)x,
and
{{f}_{1}}(x)=\left\{ \begin{array}{*{35}{l}} {{e}^{2x-2}},\quad 0\le x < 1/2 \\ -{{x}^{2}}+\frac{1}{e}+\frac{1}{4},\quad 1/2 \le x\le 1 \\ \end{array} \right.
and the exact solution is y(x) = f1(x). Solving equation (12), we transform it into a system of equations using the described method. We obtain the required numerical solution of equation (12), compared with the exact solution and the existing method [7], as shown in Table 1 and Figure 2 at n = 6. The error analysis is represented graphically in Figure 3, compared with the existing method [7]. This shows the fast convergence, efficiency, stability, and accuracy of the present method.
Table 1. Comparison of exact, method [7] and HPM with Abs. Error of application 2.
Fig. 2. Numerical solution of present method (HPM) with exact solution at n = 6 of application 2.
Fig. 3. Error analysis of HPM at n = 6 with existing method [7] of application 2.
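The constant subtracted in f(x) for this application is exactly ∫₀¹ t[f₁(t)]² dt, which makes y = f₁ a solution of equation (12). This can be confirmed numerically; the check below is ours, splitting the integral at t = 1/2, where f₁ changes branch.

```python
import math
import numpy as np

e = math.e
# Piecewise exact solution f1 of application 2 (continuous at t = 1/2)
f1 = lambda t: np.where(t < 0.5, np.exp(2 * t - 2), -t**2 + 1 / e + 0.25)

def gauss(a, b, nodes=30):
    """Gauss-Legendre nodes and weights on [a, b]."""
    t, w = np.polynomial.legendre.leggauss(nodes)
    return 0.5 * (b - a) * t + 0.5 * (a + b), 0.5 * (b - a) * w

# int_0^1 t f1(t)^2 dt, split at the breakpoint t = 1/2
total = 0.0
for a, b in [(0.0, 0.5), (0.5, 1.0)]:
    t, w = gauss(a, b)
    total += np.sum(w * t * f1(t)**2)

# Constant appearing in f(x) of application 2
const = 9 / 128 - 9 / (32 * e) + 7 / (16 * e**2) + 1 / (16 * e**4)
assert abs(total - const) < 1e-12
```

Each piece of the integrand is smooth, so the split Gauss rule resolves the integral to machine precision.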
Application 3
Consider the nonlinear Fredholm integral equation [6, 37],
y(x)=exp(x)-\frac{(1+2exp(3))x}{9}+\int_{0}^{1}{}xt{{[y(t)]}^{3}}dt,0\le x<1,
and the exact solution is y(x) = exp(x). Solving equation (13), we convert it into a system of equations using the proposed method. We obtain the approximate solution of equation (13), compared with the exact solution and the existing methods [6, 37], as shown in Table 2 and Table 3. Graphically, the numerical results and exact solutions are shown in Figure 4. The error analysis is represented graphically in Figure 5, compared with the existing methods [6, 37]. This shows the fast convergence, efficiency, stability, and validity of the present method.
Table 2. Comparison with exact, method [37] and HPM with Abs. Error of application 3.
Fig. 4. Numerical solution of present method (HPM) with exact solution at n = 6 of application 3.
Fig. 5. Error analysis of HPM at n = 6 with existing methods [6, 37] of application 3.
Application 4
Consider the nonlinear Fredholm integral equation [36],
y(x)=sin(\pi x)+\frac{1}{5}\int_{0}^{1}{}\left[ cos(\pi x)sin(\pi t) \right]{{y}^{3}}(t)dt,0\le x\le 1,
which has an exact solution
y(x)=sin(\pi x)+\frac{1}{3}\left( 20-\sqrt{391} \right)cos(\pi x). On simplifying equation (14), we convert it into a system of equations using the present technique. We obtain the required numerical solution of equation (14), compared with the exact solution and the existing method [36], as shown in Table 4 and Figure 6 at n = 6. The error analysis is represented graphically in Figure 7, compared with the existing method [36]. This shows the fast convergence, stability, and validity of the present method.
Table 4. Comparison with exact, method [36] and HPM with Abs. Error of application 4.
Fig. 6. Numerical solution of present method (HPM) with exact solution at n = 6 of application 4.
Fig. 7. Error analysis of HPM at n = 6 with existing method [36] of application 4.
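The constant c = (20 − √391)/3 in the exact solution is a root of 3c² − 40c + 3 = 0, which is the consistency condition obtained by substituting y(x) = sin(πx) + c cos(πx) into equation (14) and using ∫₀¹ sin⁴(πt) dt = 3/8 and ∫₀¹ sin²(πt)cos²(πt) dt = 1/8. A numerical check (ours):

```python
import math
import numpy as np

c = (20 - math.sqrt(391)) / 3
assert abs(3 * c**2 - 40 * c + 3) < 1e-9     # consistency condition for c

# Verify y(x) = sin(pi x) + c cos(pi x) satisfies equation (14)
t, w = np.polynomial.legendre.leggauss(60)
t, w = 0.5 * (t + 1.0), 0.5 * w              # Gauss rule on [0, 1]
y = lambda s: np.sin(math.pi * s) + c * np.cos(math.pi * s)
for x in np.linspace(0.0, 1.0, 11):
    rhs = (math.sin(math.pi * x)
           + 0.2 * np.sum(w * math.cos(math.pi * x)
                          * np.sin(math.pi * t) * y(t)**3))
    assert abs(rhs - y(x)) < 1e-12
```

The residual vanishes to rounding level at every test point, confirming the stated exact solution.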
Application 5
Consider the nonlinear Fredholm integral equation [7, 39],
y(x)=exp(x+1)-\int_{0}^{1}{}exp(x-2t){{y}^{3}}(t)dt,0\le x\le 1,
which has the exact solution y(x) = exp(x). On simplifying equation (15), we transform it into a system of equations using the present technique. The required approximate solution of equation (15) and the exact solution are shown in Table 5 and compared with the existing methods [7, 39] in Table 6. Graphically, the numerical results and exact solutions are shown in Figure 8, and Figure 9 shows the error analysis for n = 3 and n = 6 compared with the existing methods [7, 39]. This demonstrates the fast convergence and stability of the method.
Table 5. Numerical solution of present method (HPM) with Abs. Error of application 5.

x | Exact solution | HPM at n = 3 | Abs. Error at n = 3 | HPM at n = 6 | Abs. Error at n = 6
0.1 | 1.105170918 | 1.110184967 | 5.01E-03 | 1.105195722 | 2.48E-05
0.2 | 1.221402758 | 1.219896572 | 1.51E-03 | 1.221455388 | 5.26E-05
0.3 | 1.349858808 | 1.346250257 | 3.61E-03 | 1.349857068 | 1.74E-06
0.4 | 1.491824698 | 1.489246022 | 2.58E-03 | 1.491797307 | 2.74E-05
0.5 | 1.648721271 | 1.648883866 | 1.63E-04 | 1.648712515 | 8.76E-06
0.6 | 1.822118800 | 1.825163790 | 3.04E-03 | 1.822138610 | 1.98E-05
0.7 | 2.013752707 | 2.018085793 | 4.33E-03 | 2.013770656 | 1.79E-05
0.8 | 2.225540928 | 2.227649877 | 2.11E-03 | 2.225522500 | 1.84E-05
0.9 | 2.459603111 | 2.453856039 | 5.75E-03 | 2.459586419 | 1.67E-05
Table 6. Comparison with exact, HPM and existing method with error analysis of application 5.
Fig. 8. Numerical solution of present method (HPM) with exact solution at n = 6 of application 5.
Fig. 9. Error analysis of HPM at n = 3 and n = 6 with existing method [7, 39] of application 5.
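For equation (15) the check of the exact solution is fully analytic: ∫₀¹ e^{x−2t}e^{3t} dt = e^x(e − 1), so e^{x+1} − e^x(e − 1) = e^x. The short verification below is our own illustration.

```python
import math
import numpy as np

# Verify y(x) = exp(x) satisfies
# y(x) = exp(x+1) - int_0^1 exp(x - 2t) y(t)^3 dt.
t, w = np.polynomial.legendre.leggauss(40)
t, w = 0.5 * (t + 1.0), 0.5 * w              # Gauss rule on [0, 1]
y = lambda s: np.exp(s)                      # exact solution of equation (15)
for x in np.linspace(0.0, 1.0, 11):
    integral = np.sum(w * np.exp(x - 2 * t) * y(t)**3)
    rhs = math.exp(x + 1) - integral
    assert abs(rhs - y(x)) < 1e-12
```

The integrand reduces to e^x e^t, which the 40-node Gauss rule resolves to machine precision, so the residual is negligible at every test point.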
Conclusions
This paper explores the solution of nonlinear Fredholm integral equations using the Hosoya polynomial method. Through illustrative applications, we transform integral equations into algebraic systems and represent them in matrix form using MATLAB. By solving the nonlinear system with Newton’s iterative method, we obtain the Hosoya coefficients. Substituting these coefficients into the function approximation, we obtain the required approximate solution. The resulting numerical solutions are compared with exact solutions and existing methods in tables and figures, and the error analysis is demonstrated through graphical representations. The proposed technique demonstrates its efficiency, validity, accuracy, stability, and convergence compared with the existing methods. In the future, this technique can be applied to real-world problems in diverse fields, including physical models, fluid dynamics, and engineering problems.