Journal information: eISSN 2444-8656; first published 01 Jan 2016; 2 issues per year; language: English; open access.

Application of Sobolev-Volterra projection and finite element numerical analysis of integral differential equations in modern art design

Published online: 22 Nov 2021
Volume & Issue: AHEAD OF PRINT
Page range: -
Submitted: 17 Jun 2021
Accepted: 24 Sep 2021
Abstract

The article uses the SPSS statistical analysis software to establish a multiple linear regression model of short-term stock price changes in domestic listed agricultural companies. Stationary time series for agricultural value added, fiscal expenditure and market interest rates are obtained with the ARMA model, and regression is used to study their impact on the stock price index. Compared with existing stock forecasting methods, this method requires simple data collection, places no specific requirements on data selection and yields predictions with a high degree of fit. Therefore, the method is suitable for most stocks.

MSC 2010

Introduction

In the traditional art design process, when designers have inspiration and ideas, they must use tools to transfer their designs into reality. This requires human labour, so the quantity and quality of creation are limited. Moreover, multiple works cannot be produced at the same time, which is a disadvantage of traditional creative methods. Now that computers provide the level of technology needed in the field of art creation, they can help designers develop their inspiration well and realise it in less time [1].

So-called evolutionary design refers to the use of a new evolutionary computing method in computer-aided design, which can be fully exploited in the design field. The first computer evolution program in history, 'Blind Watchmaker', was proposed by Rudolph; it is mainly used to simulate tree clusters. Fogarty put forward the theory that genetic algorithms (GAs) can be used for the improvement and optimisation of designs and explained the theoretical framework of the entire design [2]. His design system can be used for the production of vehicles and seats. The use of computer-aided evolutionary design in this field has opened a new door for evolutionary art. In addition, computer simulation technology has been applied in more fields, such as table lamps and sculptures in artworks, making the application of this technology more extensive. GAs have also been used in the design of building plans. Andrew Rowbottom wrote a program called Form, which for the first time turned evolutionary design into a three-dimensional (3D) representation [3]. Matthew Lewis used evolutionary models in the fields of compositing colours, making cartoon characters, changing fonts and designing the exterior shapes of cars, and successfully created interactive design systems. Domestic scholars such as Liu Hong, Tang Mingxi and Liu Xiyu have used computers to support the innovative design of appearance modelling. Sun Shouqian, Zhang Lishan, Huang Qi and others used CAD to address problems of colour expression, design sketches and structural patterns [4].

To optimise the standard GA and remedy some of its existing shortcomings, this paper builds an illustration art design model based on a cluster- and operator-optimised GA, and the model is validated through simulation experiments [5].

Volterra integral equation
Volterra integral equations of the first kind
Linear Volterra integral equations of the first kind

These equations have the form $\int_a^x k(x,y)\varphi(y)\,dy = f(x)$.

A solution to the first type of Volterra integral equation

In some cases, a Volterra integral equation of the first kind can be transformed into an equation of the second kind. Generally, both sides of the equation are differentiated; when $k(x,y)$ and $f(x)$ are differentiable and $k(x,x) \neq 0$, the first-kind equation takes the second-kind form $\phi(x) + \int_a^x \frac{k_x'(x,y)}{k(x,x)}\,\phi(y)\,dy = \frac{f'(x)}{k(x,x)}$, which can then be solved as an integral equation of the second kind [6]. A special type of Volterra integral equation is the Abel equation, whose form is $\int_a^x \frac{\phi(y)}{(x-y)^a}\,dy = f(x)$. At $x = y$ the kernel is weakly singular. The solution rests on the following theorem: if the free term $f(x)$ of the Abel integral equation is continuously differentiable and $f(a) = 0$, then the equation has the unique solution $\phi(x) = \frac{\sin \pi a}{\pi}\,\frac{d}{dx}\int_a^x \frac{f(y)}{(x-y)^{1-a}}\,dy$.
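As an illustration only (not from the source), the following SymPy sketch checks the Abel inversion formula for an assumed case with lower limit 0, exponent $a = 1/2$ and free term $f(x) = x$, for which the known solution is $\phi(x) = 2\sqrt{x}/\pi$.

```python
# Minimal SymPy sketch (not from the paper): check the Abel inversion formula
# for the weakly singular kernel with exponent a = 1/2 and free term f(x) = x.
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Abel equation: integral_0^x phi(y) / sqrt(x - y) dy = f(x), with f(x) = x.
f = x
a = sp.Rational(1, 2)

# Inversion formula: phi(x) = (sin(pi*a)/pi) * d/dx integral_0^x f(y)/(x - y)^(1-a) dy
phi = sp.sin(sp.pi * a) / sp.pi * sp.diff(
    sp.integrate(f.subs(x, y) / (x - y)**(1 - a), (y, 0, x)), x)
phi = sp.simplify(phi)
print(phi)      # expected: 2*sqrt(x)/pi

# Substitute back into the left-hand side to verify it reproduces f(x) = x.
lhs = sp.simplify(sp.integrate(phi.subs(x, y) / sp.sqrt(x - y), (y, 0, x)))
print(lhs)      # expected: x
```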

Volterra integral equations of the second kind
The second type of linear Volterra integral equation

The second-kind equation has the form $\phi(x) - \lambda\int_a^x k(x,y)\phi(y)\,dy = f(x)$, where $\phi(x)$ is the unknown function to be found and $\lambda$ is a parameter that is either known or to be discussed [7]. As in the Fredholm equation, $k(x,y)$ is a known function, called the kernel of the Volterra equation. When $k(x,y) = 0$ for $y > x$, the Volterra equation can be regarded as a special form of the Fredholm equation, so the theory of the Fredholm equation applies to the Volterra equation. The Volterra equation nevertheless has its own characteristics: for example, it has no eigenvalues, and it has a solution for an arbitrary free term. A Volterra equation of the first kind can be transformed into one of the second kind under certain conditions. Like the linear Fredholm integral equation of the second kind, the linear Volterra integral equation of the second kind has its own iterated kernels and resolvent [8], and the methods for deriving them are the same as for the Fredholm equation of the second kind [9].

Iteration: suppose $k_2(x,y) = \int_y^x k(x,t)k(t,y)\,dt$.

Then $\phi_2(x) = \int_a^x k_2(x,y)f(y)\,dy$ and, in general, $\phi_n(x) = \int_a^x k_n(x,y)f(y)\,dy$. We call $k_n(x,y)$ the iterated kernel of the Volterra equation, and $R(x,y;\lambda) = \sum_{n=1}^{\infty} \lambda^{n-1} k_n(x,y)$ the resolvent (solution) kernel. If the iterated kernels of the equation are known, the resolvent kernel, and hence the solution of the equation, can be obtained.

The solution of the linear Volterra integral equation of the second kind:

Suppose the equation has a solution of the form $\phi(x) = \phi_0(x) + \phi_1(x)\lambda + \cdots + \phi_n(x)\lambda^n = \sum_{i=0}^n \phi_i(x)\lambda^i$. If the equation is to be solved by successive approximation, one generally sets $\varphi_0 = f(x)$, $\varphi_1 = f(x) + \int_a^x k(x,y)\varphi_0\,dy$, $\varphi_2 = f(x) + \int_a^x k(x,y)\varphi_1\,dy$, $\cdots$, $\varphi_n = f(x) + \int_a^x k(x,y)\varphi_{n-1}\,dy$. It can then be proved that the series $\sum_{i=0}^n \varphi_i(x)\lambda^i$ converges and that the equation has a solution for any parameter $\lambda$, according to the following theorem: if the kernel $k(x,y)$ and the free term $f(x)$ are continuous real functions [10], then the linear Volterra equation of the second kind $\phi(x) - \lambda\int_a^x k(x,y)\phi(y)\,dy = f(x)$ has a unique continuous solution for any parameter $\lambda$, and the solution can be obtained by successive approximation.

Iteration: suppose $k_2(x,y) = \int_y^x k(x,t)k(t,y)\,dt$.

Then $\varphi_2(x) = \int_a^x k_2(x,y)f(y)\,dy$, $\ldots$, $\varphi_n(x) = \int_a^x k_n(x,y)f(y)\,dy$. We call $k_n(x,y)$ the iterated kernel of the Volterra equation and $R(x,y;\lambda) = \sum_{n=1}^{\infty}\lambda^{n-1}k_n(x,y)$ the resolvent (solution) kernel. If the iterated kernels of the equation are known, the resolvent kernel can be obtained and, from it, the solution of the equation [11].
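As a numerical illustration (not from the paper), the following NumPy sketch applies the successive-approximation idea to an assumed test problem with $k(x,y) = 1$, $\lambda = 1$ and $f(x) = 1$ on $[0,1]$, whose exact solution is $\phi(x) = e^x$; the inner integral is approximated with the trapezoidal rule.

```python
# Minimal sketch (not from the paper): successive approximation for a
# second-kind Volterra equation phi(x) - lambda * integral_a^x k(x,y) phi(y) dy = f(x).
# Assumed test problem: k(x,y) = 1, lambda = 1, f(x) = 1 on [0, 1], exact solution exp(x).
import numpy as np

def successive_approximation(k, f, lam, a, b, n=200, iterations=30):
    x = np.linspace(a, b, n + 1)
    phi = f(x).astype(float)                 # phi_0 = f
    for _ in range(iterations):
        new_phi = np.empty_like(phi)
        for i, xi in enumerate(x):
            # trapezoidal quadrature of k(xi, y) * phi(y) over [a, xi]
            integrand = k(xi, x[:i + 1]) * phi[:i + 1]
            new_phi[i] = f(xi) + lam * np.trapz(integrand, x[:i + 1])
        phi = new_phi
    return x, phi

x, phi = successive_approximation(lambda x, y: np.ones_like(y),
                                  lambda x: np.ones_like(x),
                                  lam=1.0, a=0.0, b=1.0)
print(np.max(np.abs(phi - np.exp(x))))       # small error expected
```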

Nonlinear Volterra integral equations of the second kind

The nonlinear Volterra integral equation of the second kind has the form $\phi(x) - \lambda\int_a^x k(x,y)F(\phi(y))\,dy = f(x)$, where $\phi(x)$ is the unknown function and $f(x)$, $k(x,y)$ and $F(x)$ are all known. When the equation meets certain conditions, it can be solved by successive approximation. A nonlinear Volterra integral equation of the first kind can be solved by transforming it into one of the second kind [12]; for the specific conversion process, see the numerical solution of the nonlinear Volterra integral equation below [13].

The solution of the convolution Volterra equation
Solution of the convolution Volterra integral equation of the second kind

The equation $\phi(x) = f(x) + \int_a^x k(x-y)\phi(y)\,dy$ is called the convolution Volterra integral equation of the second kind, and this type of equation is generally solved by the Laplace transform. If $f(x)$ and $k(x)$ are sufficiently smooth functions of exponential order, the solution of the equation is also of exponential order, so the equation can be solved by the Laplace transform. Let $\varpi\{k(x)\} = K(p)$, $\varpi\{f(x)\} = F(p)$ and $\varpi\{\phi(x)\} = \phi(p)$, and apply the Laplace transform to both sides of the equation; we obtain $\phi(p) = F(p) + K(p)\phi(p)$, which gives $\phi(p) = \frac{F(p)}{1-K(p)}$, and, when $K(p) \neq 1$, $\phi(x) = \varpi^{-1}\left\{\frac{F(p)}{1-K(p)}\right\}$.
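A brief SymPy sketch (illustrative only, not from the paper) of this Laplace-transform route for an assumed convolution example with $k(x) = 1$ and $f(x) = 1$, whose exact solution is $e^x$:

```python
# Minimal SymPy sketch (not from the paper): solving a convolution Volterra
# equation of the second kind by Laplace transform. Assumed example:
# phi(x) = 1 + integral_0^x phi(y) dy, i.e. f(x) = 1, k(x) = 1, exact solution exp(x).
import sympy as sp

x, p = sp.symbols('x p', positive=True)

f = sp.Integer(1)          # free term f(x)
k = sp.Integer(1)          # convolution kernel k(x)

F = sp.laplace_transform(f, x, p, noconds=True)   # F(p) = 1/p
K = sp.laplace_transform(k, x, p, noconds=True)   # K(p) = 1/p

Phi = sp.simplify(F / (1 - K))                     # Phi(p) = F(p) / (1 - K(p))
phi = sp.inverse_laplace_transform(Phi, p, x)
print(sp.simplify(phi))                            # expected: exp(x) (times Heaviside(x))
```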

For the Volterra integral equation of the first kind, that is, the equation $\int_a^x k(x-t)\phi(t)\,dt = f(x)$, the Laplace transform is likewise applied to both sides, which gives $\phi(p) = \frac{F(p)}{K(p)}$, so the solution of the equation is $\phi(x) = \varpi^{-1}\left\{\frac{F(p)}{K(p)}\right\}$.

For the nonlinear convolution Volterra integral equation $\phi(x) = f(x) + \lambda\int_0^x \phi(y)\phi(x-y)\,dy$, let $\varpi\{\phi(x)\} = \phi(p)$ and $\varpi\{f(x)\} = F(p)$, and apply the Laplace transform to both sides of the equation; we obtain $\phi(x) = \varpi^{-1}\left\{\frac{1 \pm \sqrt{1 - 4\lambda F(p)}}{2\lambda}\right\}$. When the inverse transform of $\frac{1 \pm \sqrt{1-4\lambda F(p)}}{2\lambda}$ exists, it gives the solution of the nonlinear Volterra integral equation by Laplace transform.

Fundamentals of numerical integration
Newton-Cotes integral quadrature formula

Here we mainly discuss the numerical calculation of $\int_a^b f(x)\,dx$. We assume that $f(x)$ is integrable on $[a,b]$. In many cases the antiderivative of $f(x)$ cannot be expressed in terms of elementary functions, so we study numerical methods for evaluating the integral.

To calculate the integral $I(f) = \int_a^b f(x)W(x)\,dx$, where $W(x)$ is a weight function, assume that $f(x)$ is known at $n+1$ distinct points $a \le x_1 < x_2 < \cdots < x_{n+1} \le b$, with values $f(x_1), f(x_2), \ldots, f(x_{n+1})$. A linear combination of these values gives an approximation $I_n(f) \approx I(f)$, where $I_n = \sum_{i=1}^{n+1} A_i f(x_i)$ and $E_n(f) = I(f) - I_n(f)$ is the discretisation error. The coefficients are $A_i = \int_a^b l_i(x)W(x)\,dx$, with $l_i(x) = \frac{w_{n+1}(x)}{(x-x_i)\,w_{n+1}'(x_i)}$, $w_{n+1}(x) = (x-x_1)(x-x_2)\cdots(x-x_{n+1})$, $i = 1,2,\ldots,n+1$. If $[a,b]$ is a finite interval, $W(x) = 1$ and the interval is divided into $n$ equal parts with equidistant base points $a = x_1 < x_2 < \cdots < x_{n+1} = b$ and step length $h = x_{i+1} - x_i = \frac{b-a}{n}$, the interpolatory quadrature formula above becomes the Newton-Cotes quadrature formula $I_n(f) = \sum_{i=1}^{n+1} A_i f(x_i)$, where $A_i = \int_a^b \frac{w_{n+1}(x)}{(x-x_i)\,w_{n+1}'(x_i)}\,dx = (-1)^{n+1-i}\frac{h}{(i-1)!(n+1-i)!}\int_0^n t(t-1)\cdots(t-(i-2))(t-i)\cdots(t-n)\,dt$. In the Newton-Cotes formula, when $n = 1$ a trapezoidal formula is obtained: with $x_1 = a$, $x_2 = b$, $I_1(f) = \frac{b-a}{2}\big(f(a)+f(b)\big)$, where $A_1 = A_2 = \frac{b-a}{2}$. Setting $n = 2$ yields the Simpson formula.
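As a check (not from the source), the following SymPy sketch evaluates the Newton-Cotes weights $A_i$ from the formula above and recovers the familiar trapezoidal ($n = 1$) and Simpson ($n = 2$) weights.

```python
# Minimal SymPy sketch (not from the paper): computing Newton-Cotes weights A_i
# from the formula above and recovering the trapezoidal and Simpson rules.
import sympy as sp

def newton_cotes_weights(n, a, b):
    h = (b - a) / n
    t = sp.symbols('t')
    weights = []
    for i in range(1, n + 2):                      # base points x_1 ... x_{n+1}
        # product t(t-1)...(t-(i-2)) * (t-i)...(t-n), skipping the factor (t-(i-1))
        prod = sp.prod([t - j for j in range(n + 1) if j != i - 1])
        A_i = (-1)**(n + 1 - i) * h / (sp.factorial(i - 1) * sp.factorial(n + 1 - i)) \
              * sp.integrate(prod, (t, 0, n))
        weights.append(sp.simplify(A_i))
    return weights

a, b = sp.symbols('a b')
print(newton_cotes_weights(1, a, b))   # expected: [(b-a)/2, (b-a)/2]            -> trapezoid
print(newton_cotes_weights(2, a, b))   # expected: [(b-a)/6, 2(b-a)/3, (b-a)/6]  -> Simpson
```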

Compound trapezoidal formula

We assume that the integration interval in question is $[a,b]$ and take $n+1$ mutually distinct base points in this interval, $a = x_1 < x_2 < \cdots < x_{n+1} = b$, with step size $h = x_{i+1} - x_i = \frac{b-a}{n}$. Applying the trapezoidal formula on each subinterval $[x_i, x_{i+1}]$ gives $\int_a^b f(x)\,dx = \sum_{i=1}^n \int_{x_i}^{x_{i+1}} f(x)\,dx = \frac{h}{2}\sum_{i=1}^n \big[f(x_i)+f(x_{i+1})\big] - \frac{h^3}{12}\sum_{i=1}^n f''(\xi_i)$. This leads to $\int_a^b f(x)\,dx = \frac{h}{2}\left[f(a)+f(b)+2\sum_{i=1}^{n-1} f(a+ih)\right] - \frac{h^3}{12}\sum_{i=1}^n f''(\xi_i)$. Dropping the remainder term $\frac{h^3}{12}\sum_{i=1}^n f''(\xi_i)$, we obtain the compound trapezoidal formula $\int_a^b f(x)\,dx \approx \frac{h}{2}\left[f(a)+f(b)+2\sum_{i=1}^{n-1} f(a+ih)\right]$.
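A short NumPy sketch (an illustration, not from the paper) of the compound trapezoidal formula applied to the assumed test integral $\int_0^1 e^x\,dx$ shows the expected second-order error decay.

```python
# Minimal NumPy sketch (not from the paper): the compound trapezoidal formula
# applied to integral_0^1 exp(x) dx, showing the expected O(h^2) error decay.
import numpy as np

def compound_trapezoid(f, a, b, n):
    h = (b - a) / n
    x_inner = a + h * np.arange(1, n)          # interior points a + i*h
    return h / 2 * (f(a) + f(b) + 2 * np.sum(f(x_inner)))

exact = np.e - 1.0
for n in (10, 20, 40, 80):
    err = abs(compound_trapezoid(np.exp, 0.0, 1.0, n) - exact)
    print(n, err)                               # error roughly quarters as n doubles
```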

Numerical solution method of integral equation
Unknown function expansion method

Here we discuss the role of a complete function system in $L^2(a,b)$ in constructing approximate solutions [14]. The function system can be orthogonal or non-orthogonal, and finitely many of its terms are used as an approximation to the solution. Let the function system be $\{\varphi_i\}_{i=1}^n$, where the functions are linearly independent. Write the solution of the integral equation as $\phi(x) \approx \sum_{i=1}^n c_i\varphi_i$ and substitute this approximation into the integral equation $\phi(x) - \lambda\int_a^x k(x,y)\phi(y)\,dy = f(x)$. This gives $\sum_{i=1}^n c_i\varphi_i \approx \lambda\sum_{i=1}^n c_i\int k(x,y)\varphi_i\,dy + f(x)$, which can be rearranged as $\sum_{i=1}^n c_i\varphi_i - \lambda\sum_{i=1}^n c_i\int k(x,y)\varphi_i\,dy - f(x) = R(x)$, where $R(x)$ is the residual. If $R(x)$ were identically zero, the approximate solution would equal the exact solution, but in general it is difficult to make $R(x)$ vanish [15]. When $R(x)$ is small it can be neglected, and a numerical solution of the equation is obtained from $\sum_{i=1}^n c_i\varphi_i = \lambda\sum_{i=1}^n c_i\int k(x,y)\varphi_i\,dy + f(x)$. Different requirements on the residual lead to different solution methods, chiefly the collocation method [16], the moment method, the Galerkin method and the least-squares method. If the residual is required to satisfy $R(x_k) = 0$ at selected base points $\{x_k\}_{k=1}^n$, we obtain the system of equations $\sum_{i=1}^n c_i\varphi_i(x_k) - \lambda\sum_{i=1}^n c_i\int_a^{x_k} k(x_k,y)\varphi_i(y)\,dy = f(x_k)$, $k = 1,2,\cdots,n$. Solving this system yields the expansion coefficients $\{c_i\}_{i=1}^n$. For the moment method applied to the Fredholm integral equation $\phi(x) - \lambda\int_a^b k(x,y)\phi(y)\,dy = f(x)$, the moments of the residual about the origin up to order $n$ are required to vanish, that is $\int_a^b R(x)x^k\,dx = 0$, which gives the system $\sum_{i=1}^n c_i\int_a^b \varphi_i(x)x^k\,dx - \lambda\sum_{i=1}^n c_i\int_a^b\left[\int_a^b k(x,y)\varphi_i(y)\,dy\right]x^k\,dx = \int_a^b f(x)x^k\,dx$. Solving this system again yields the coefficients $\{c_i\}_{i=1}^n$. The Galerkin method requires that the residual $R(x)$ be orthogonal, in the inner product of the square-integrable space $L^2[a,b]$, to each function $\varphi_i$, that is $\int_a^b R(x)\varphi_i\,dx = 0$, $i = 1,2,\cdots,n$. The coefficients $\{c_i\}_{i=1}^n$ are then determined by taking the first $n$ functions $\varphi_i$ ($i = 1,2,\cdots,n$) of the function system and forming the inner product of the equation with them on $[a,b]$:
$\sum_{i=1}^n c_i\varphi_i(x) - \lambda\sum_{i=1}^n c_i\int_a^b k(x,y)\varphi_i(y)\,dy = f(x)$. Taking the inner product of both sides with $\varphi_i(x)$, the expansion coefficients of $\phi_n = \sum c_i\varphi_i$ satisfy the linear system $\left(\phi_n(x),\varphi_i(x)\right) = \lambda\left(\int_a^b k(x,y)\phi_n\,dy,\;\varphi_i(x)\right) + \left(f(x),\varphi_i(x)\right)$, $i = 1,2,\cdots,n$, where $\left(f(x),g(x)\right) = \int_a^b f(x)g(x)\,dx$. Solving this system gives the expansion coefficients $\{c_i\}_{i=1}^n$.
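For illustration (not from the paper), the following sketch applies the collocation idea with an assumed monomial basis to the test equation $\phi(x) - \int_0^x \phi(y)\,dy = 1$ on $[0,1]$, whose exact solution is $e^x$; the integrals of the basis functions are evaluated analytically.

```python
# Minimal sketch (not from the paper): collocation with a monomial basis
# phi(x) ≈ sum_i c_i * x^i for the assumed test equation
# phi(x) - integral_0^x phi(y) dy = 1 on [0, 1], exact solution exp(x).
import numpy as np

n = 8                                    # number of basis functions / collocation points
xk = np.linspace(0.0, 1.0, n)            # collocation points x_k
lam = 1.0

# A[k, i] = e_i(x_k) - lam * integral_0^{x_k} e_i(y) dy, with e_i(y) = y^i
# and integral_0^{x_k} y^i dy = x_k^(i+1) / (i+1)  (computed analytically).
A = np.empty((n, n))
for i in range(n):
    A[:, i] = xk**i - lam * xk**(i + 1) / (i + 1)

f = np.ones(n)                           # right-hand side f(x_k) = 1
c = np.linalg.solve(A, f)                # expansion coefficients c_i

x = np.linspace(0.0, 1.0, 201)
phi = sum(c[i] * x**i for i in range(n)) # assembled approximate solution
print(np.max(np.abs(phi - np.exp(x))))   # small error expected
```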

Expansion of integral kernel series

The integral kernel series expansion method, also known as the degenerate kernel approximation method, uses a series expansion to approximate a non-degenerate kernel by a degenerate one. Common expansions are the Taylor series and the Fourier series. A linearly independent function system in $L^2[a,b]$ is used for the approximate expansion of a known function. If a Taylor expansion is used, care should be taken to retain an appropriate number of terms; in general, the number of terms should be determined according to the size of the integration limits. There are also methods that expand the unknown function and substitute the expansion into the integral equation.
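As an illustration only (not from the paper), the following sketch replaces an assumed kernel $e^{xy}$ on a Fredholm test equation by its truncated Taylor series, solves the resulting degenerate-kernel equation for the moments, and checks the residual against the original kernel.

```python
# Minimal sketch (not from the paper): degenerate-kernel (Taylor) approximation for
# an assumed Fredholm test equation phi(x) - lam * int_0^1 exp(x*y) phi(y) dy = f(x).
# The kernel exp(x*y) is replaced by the truncated series sum_m (x*y)^m / m!.
import numpy as np
from math import factorial

lam = 0.5
f = lambda x: np.ones_like(x)
M = 8                                     # number of retained Taylor terms

# With k~(x,y) = sum_m x^m y^m / m!, the solution has the form
# phi(x) = f(x) + lam * sum_m (x^m / m!) * c_m,  c_m = int_0^1 y^m phi(y) dy.
# Substituting gives a linear system for the moments c_m:
# c_j = int_0^1 y^j f(y) dy + lam * sum_m c_m / (m! * (j + m + 1)).
A = np.eye(M)
b = np.empty(M)
for j in range(M):
    b[j] = 1.0 / (j + 1)                  # int_0^1 y^j * 1 dy
    for m in range(M):
        A[j, m] -= lam / (factorial(m) * (j + m + 1))
c = np.linalg.solve(A, b)

phi = lambda x: f(x) + lam * sum(c[m] * x**m / factorial(m) for m in range(M))

# Residual check against the original (non-degenerate) kernel on a fine grid.
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 2001)
integral = np.array([np.trapz(np.exp(xi * y) * phi(y), y) for xi in x])
print(np.max(np.abs(phi(x) - lam * integral - f(x))))   # small residual expected
```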

The following estimate is used for the error introduced when the kernel of the integral equation is approximated by a degenerate kernel:

Theorem

[17] Let $\tilde k(x,y)$ be an approximate degenerate kernel of the kernel $k(x,y)$ of the integral equation, satisfying $\int_a^b \left|k(x,y) - \tilde k(x,y)\right|dy < h$.

Suppose the resolvent kernel $\tilde R(x,y;\lambda)$ of the integral equation with the degenerate kernel $\tilde k(x,y)$ as its kernel satisfies $\int_a^b \left|\tilde R(x,y;\lambda)\right|dy < R$. Then the solution $\phi(x)$ of the integral equation $\phi(x) = f(x) + \lambda\int_a^b k(x,y)\phi(y)\,dy$ and the solution $\tilde\phi(x)$ of the equation with the kernel replaced by the approximate degenerate kernel satisfy $\left|\tilde\phi(x) - \phi(x)\right| < \frac{B|\lambda|(1+|\lambda|R)^2 h}{1 - |\lambda|h(1+|\lambda|R)}$, where $B$ is an upper bound of $|f(x)|$.

Numerical solution of nonlinear Volterra integral equation

Consider $\phi(x) - \lambda\int_a^x k(x,y)F(\phi(y))\,dy = f(x)$. We assume that $f(x)$, $k(x,y)$ and $F(x)$ are continuous functions in their domains and use a numerical quadrature formula to obtain $\varphi_i - \lambda\sum_{m=1}^i A_m K_{im} F(\phi_m) = f_i$, $i = 1,2,\ldots,n$, where $\varphi_i = \phi(x_i)$, $A_m$ are the weight coefficients of the quadrature formula, $K_{im} = k(x_i,x_m)$ and $f_i = f(x_i)$. This is a lower-triangular system of order $n$ with $\phi(a) = f(a)$. Solving it forward from the first equation yields the $n$ numerical values, and thus the approximate solution of the equation, $\phi(x_i) = \lambda\sum_{m=1}^i A_m k(x_i,x_m)F(\phi_m) + f(x_i)$, $i = 1,2,\ldots,n$. As $n$ approaches infinity, the numerical solution tends to the exact solution.

We divide the integration interval $[a,b]$ into $n$ equal subintervals $[x_i, x_{i+1}]$, $i = 1,2,\cdots,n$, with step size $h = \frac{b-a}{n}$, and use the compound trapezoidal formula $T_n(f) = \frac{h}{2}\left[f(a)+f(b)+2\sum_{i=1}^{n-1} f(a+ih)\right]$. Applying this formula to the integral equation gives $\phi(x_i) - \lambda\left[\frac{h}{2}\left(k(x_i,x_1)F(\phi(x_1)) + k(x_i,x_i)F(\phi(x_i)) + 2\sum_{j=2}^{i-1} k(x_i,x_j)F(\phi(x_j))\right)\right] = f(x_i)$. In this way we obtain a lower-triangular system of equations. Solving this system step by step gives $n$ numerical values of the solution, which can then be fitted by interpolation to obtain an approximate solution.
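The following sketch (an illustration under stated assumptions, not the paper's implementation) carries out this trapezoidal discretisation step by step; because the $i$-th equation still contains $F(\phi_i)$, each node is resolved with a few fixed-point iterations. The assumed test problem is $k = 1$, $F(u) = u^2$, $f = 1$, $\lambda = 1$ on $[0, 0.5]$, whose exact solution is $\phi(x) = 1/(1-x)$.

```python
# Minimal sketch (not from the paper): trapezoidal discretisation of the nonlinear
# Volterra equation phi(x) - lam * int_a^x k(x,y) F(phi(y)) dy = f(x), solved
# step by step (lower-triangular structure), with a few fixed-point iterations
# per node for the implicit term.
import numpy as np

def solve_volterra_trapezoid(k, F, f, lam, a, b, n, inner_iters=20):
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    phi = np.empty(n)
    phi[0] = f(x[0])                       # phi(a) = f(a)
    for i in range(1, n):
        # fixed part of the trapezoid sum (nodes x_1 .. x_{i-1})
        s = 0.5 * k(x[i], x[0]) * F(phi[0]) + np.sum(
            [k(x[i], x[j]) * F(phi[j]) for j in range(1, i)])
        guess = phi[i - 1]
        for _ in range(inner_iters):       # resolve the implicit F(phi_i) term
            guess = f(x[i]) + lam * h * (s + 0.5 * k(x[i], x[i]) * F(guess))
        phi[i] = guess
    return x, phi

x, phi = solve_volterra_trapezoid(lambda x, y: 1.0, lambda u: u**2,
                                  lambda x: 1.0, lam=1.0, a=0.0, b=0.5, n=51)
print(np.max(np.abs(phi - 1.0 / (1.0 - x))))   # small error expected
```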

First, in an $n$-dimensional space, assume a linearly independent set of basis functions $\{e_i(x)\}_{i=1}^n$ so that the unknown approximate solution can be written $\phi_n(x) = \sum_{i=1}^n c_i e_i(x)$; once the values $\{c_i\}_{i=1}^n$ are found, the approximate solution of the equation is obtained. The projection of $V(x)$, $x \in [a,b]$, onto the $n$-dimensional space is $p_n V(x) = \sum_{i=1}^n V(x_i)e_i(x)$, where the $x_i$ are the interpolation points, so that $\phi_n(x) = p_n\phi(x) = \sum_{i=1}^n c_i e_i$. Applying $p_n$ to both sides of the equation gives $p_n\phi(x) + p_n\int_a^x k(x,y)F(\phi(y))\,dy = p_n f(x)$. Substituting the expansion into this projected equation, we obtain successively $\sum_{i=1}^n c_i e_i(x) + \sum_{i=1}^n \left[\int_a^{x_i} k(x_i,y)F(\varphi(y))\,dy\right] e_i(x) = \sum_{i=1}^n f(x_i)e_i(x)$, then $\sum_{i=1}^n e_i(x)\left[c_i + \int_a^{x_i} k(x_i,y)F(\varphi(y))\,dy - f(x_i)\right] = 0$, and hence, for each $i$, $c_i + \int_a^{x_i} k(x_i,y)F(\varphi(y))\,dy - f(x_i) = 0$. Approximating the integral by a quadrature formula with weights $A_j$ and nodes $y_j$ finally gives $c_i + \sum_{j=1}^i A_j\,k(x_i,y_j)\,F\!\left(\sum_{m=1}^n c_m e_m(y_j)\right) - f(x_i) = 0$.

Case study
Algorithm simulation

To verify the feasibility of the algorithm, unimodal and multimodal test functions are used to compare the standard GA with the improved algorithm K-GA proposed in this paper. The unimodal function is $\min f_1(x,y) = 100\,(y-x^2)^2 + (x-1)^2$, and the multimodal function is $\max f_2(x,y) = 21.5 + x\sin(4\pi x) + y\sin(20\pi y)$. The average number of generations required by the standard GA and by the improved algorithm is shown in Table 1.

Table 1. Algorithm simulation results of the test functions (average number of generations).

Crossover probability   f1: GA   f1: K-GA   f2: GA   f2: K-GA
1.0                     56.28    7.32       12.67    4.71
0.9                     57.03    7.84       13.61    5.38
0.8                     60.05    8.09       13.88    6.37
0.7                     61.44    8.67       15.58    6.94
0.6                     62.49    9.52       19.71    7.54
0.5                     63.58    10.49      20.85    8.06

As Table 1 shows, for both test functions the average number of generations is smallest when the crossover probability is set to 1, and it decreases gradually as the crossover probability increases. Therefore, the greater the crossover probability, the greater the probability of producing outstanding individuals, which improves the performance of the GA.
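For illustration only, the sketch below runs a minimal standard real-coded GA on $f_1$; it is not the paper's K-GA, whose operator and clustering improvements are not reproduced here, and the population size, tournament selection, arithmetic crossover, Gaussian mutation and stopping threshold are all assumptions. The crossover probability `pc` corresponds to the first column of Table 1.

```python
# Minimal sketch (not from the paper) of a standard real-coded GA on the unimodal
# test function f1; the K-GA improvements described in the paper are not implemented.
import numpy as np

rng = np.random.default_rng(0)

def f1(p):                                   # Rosenbrock-type unimodal function, minimum at (1, 1)
    x, y = p
    return 100.0 * (y - x**2)**2 + (x - 1.0)**2

def standard_ga(pc=1.0, pm=0.05, pop_size=50, generations=300, bounds=(-2.0, 2.0)):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for gen in range(generations):
        fitness = np.array([f1(ind) for ind in pop])
        if fitness.min() < 1e-3:             # stop once a near-optimal individual appears
            return gen, pop[fitness.argmin()]
        # tournament selection
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # arithmetic crossover with probability pc
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                alpha = rng.random()
                children[i]     = alpha * parents[i] + (1 - alpha) * parents[i + 1]
                children[i + 1] = alpha * parents[i + 1] + (1 - alpha) * parents[i]
        # Gaussian mutation with probability pm per individual
        mutate = rng.random(pop_size) < pm
        children[mutate] += rng.normal(0.0, 0.1, size=(mutate.sum(), 2))
        pop = np.clip(children, lo, hi)
    return generations, pop[np.argmin([f1(ind) for ind in pop])]

for pc in (1.0, 0.8, 0.6):
    gens, best = standard_ga(pc=pc)
    print(f"pc = {pc}: stopped after {gens} generations, best point {best}")
```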

Case analysis

Taking the illustration design of the hexagon fractal as an example, the design results using the standard GA and the improved algorithm proposed in this paper are shown in Figures 1 and 2:

Fig. 1

Fractal image illustration art design of standard algorithm.

Fig. 2

Artistic design of fractal image illustration with improved algorithm.

A comparison of Figures 1 and 2 shows that the optimised algorithm proposed in this article can be applied to illustration art and produces more innovative designs.

Summary

Computer technology continues to advance worldwide, and the use of evolutionary techniques to support product innovation and design is an important direction. How to improve such algorithms, and how this technology can be better used and practised in the field of design, is being studied by more and more people. In this article, some shortcomings of the standard GA were pointed out, and an illustration art design experiment was conducted with the operator- and cluster-optimised GA. The experiment shows that the optimised algorithm is more creative than design algorithms without optimisation.



Urve Kangro. 2017. Cordial Volterra integral equations and singular fractional integro-differential equations in spaces of analytic functions. Mathematical Modelling & Analysis, 22(4), pp. 548–567. doi:10.3846/13926292.2017.1333970

Ermine Oganesovna Azizian, Khachatur Aghavardovich Khachatryan. 2016. One-parametric family of positive solutions for a class of nonlinear discrete Hammerstein-Volterra equations. Ufa Mathematical Journal, 8(1), pp. 13–19. doi:10.13108/2016-8-1-13

Yunxia Wei, Yanping Chen, Yunqing Huang. 2018. Legendre collocation method for Volterra integro-differential algebraic equation. Journal of Scientific Computing, 19(3), pp. 672–688. doi:10.1515/cmam-2018-0016

Jorge Hounie, Tiago Picon. 2016. L1 Sobolev estimates for (pseudo)-differential operators and applications. Mathematische Nachrichten, 289(14–15), pp. 1838–1854. doi:10.1002/mana.201500017

R. W. Ibrahim, M. Z. Ahmad, M. J. Mohammed. 2016. Periodicity and positivity of a class of fractional differential equations, 5(1), p. 824. doi:10.1186/s40064-016-2386-z

M. A. Abdou, S. A. Raad. 2016. Nonlocal solution of a nonlinear partial differential equation and its equivalent of nonlinear integral equation. Journal of Computational & Theoretical Nanoscience, 13(7), pp. 4580–4587. doi:10.1166/jctn.2016.5323

Gert Audring. 2017. Isomorphism theorems for some parabolic initial-boundary value problems in Hörmander spaces. Open Mathematics, 15(1), pp. 57–76. doi:10.1515/math-2017-0008

Min Cai, Changpin Li. 2019. Regularity of the solution to Riesz-type fractional differential equation. Integral Transforms and Special Functions, 30(2), pp. 1–32. doi:10.1080/10652469.2019.1613988

Mohammad Zarebnia, Reza Parvaz, Amir Saboor Bagherzadeh. 2018. Deviation of the error estimation for Volterra integro-differential equations. Acta Mathematica Scientia (English Series), 38(4), pp. 1322–1344. doi:10.1016/S0252-9602(18)30817-8

Shaofei Wu. 2015. A Traffic Motion Object Extraction Algorithm. International Journal of Bifurcation and Chaos, 25(14), 1540039. doi:10.1142/S0218127415400398

Shaofei Wu, Mingqing Wang, Yuntao Zou. 2018. Research on internet information mining based on agent algorithm. Future Generation Computer Systems, 86, pp. 598–602. doi:10.1016/j.future.2018.04.040

Shaofei Wu. 2019. Nonlinear information data mining based on time series for fractional differential operators. Chaos, 29, 013114. doi:10.1063/1.5085430

M. Salai Mathi Selvi, L. Rajendran. 2019. Application of modified wavelet and homotopy perturbation methods to nonlinear oscillation problems. Applied Mathematics and Nonlinear Sciences, 4(2), pp. 351–364. doi:10.2478/AMNS.2019.2.00030

Faruk Dusunceli. 2019. New Exact Solutions for Generalized (3+1) Shallow Water-Like (SWL) Equation. Applied Mathematics and Nonlinear Sciences, 4(2), pp. 365–370. doi:10.2478/AMNS.2019.2.00031

Shailaja Shirakol, Manjula Kalyanshetti, Sunilkumar M. Hosamani. 2019. QSPR Analysis of certain Distance Based Topological Indices. Applied Mathematics and Nonlinear Sciences, 4(2), pp. 371–386. doi:10.2478/AMNS.2019.2.00032

P. F. Zhou, Q. Fan, J. Zhu. 2020. Empirical Analysis on Environmental Regulation Performance Measurement in Manufacturing Industry: A Case Study of Chongqing. Applied Mathematics and Nonlinear Sciences, 5(1), pp. 25–34. doi:10.2478/amns.2020.1.00003

D. K. Josheski, E. Karamazova, M. Apostolov. 2019. Shapley-Folkman-Lyapunov theorem and Asymmetric First price auctions. Applied Mathematics and Nonlinear Sciences, 4(2), pp. 331–350. doi:10.2478/AMNS.2019.2.00029
