Journal details
Format: Journal
eISSN: 2444-8656
First published: 01 Jan 2016
Publication frequency: 2 issues per year
Languages: English
Open Access

A Hybrid Computational Intelligence Method of Newton's Method and Genetic Algorithm for Solving Compatible Nonlinear Equations

Published online: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Received: 18 Apr 2022
Accepted: 12 Jun 2022
Introduction

The quasi-Newton method, like the steepest descent method, requires only the gradient of the objective function at each iteration. By measuring the change in the gradient, it constructs a model of the objective function sufficient to produce superlinear convergence. This class of methods outperforms the steepest descent method by a large margin, especially on hard problems. Moreover, since the quasi-Newton method does not require second-derivative information, it is sometimes more efficient than Newton's method. Today, optimization software includes a large number of quasi-Newton algorithms for solving unconstrained, constrained, and large-scale optimization problems. The quasi-Newton method is one of the most effective methods for solving nonlinear equations and optimization problems: it is a Newton-type iterative method that reduces the computational cost of each iteration while maintaining superlinear convergence [1].

The Genetic Algorithm (GA) belongs to evolutionary computing. It originated in the mid-1960s, was proposed by John Holland of the University of Michigan and others, and flourished in the mid-1980s. The genetic algorithm is inspired by biological evolution in nature: the essence behind the natural law of "survival competition and survival of the fittest" is an optimization idea. Evolutionary Computation (EC) is a general problem-solving method developed from this idea. It uses simple coding techniques to represent various complex structures and, through simple genetic operations and fitness-based natural selection on a set of encoded representations, guides learning and determines the direction of the search. Since it organizes the search over a population (i.e., a set of coded representations), it can search multiple regions of the solution space simultaneously [2], and this population-based organization makes evolutionary algorithms especially suitable for massive parallelism. Besides endowing evolutionary computation with features such as self-organization, self-adaptation, and self-learning, natural selection and simple genetic operations free evolutionary computation from restrictions on the structure of its search space (such as discreteness, continuity, or unimodality) and from the need for auxiliary information (such as derivatives). These characteristics make evolutionary computation not only efficient but also simple, easy to apply, and versatile.

Genetic algorithm has the following characteristics:

Self-organization, self-adaptation, and self-learning (intelligence). When the genetic algorithm is applied to a problem, once the coding scheme, fitness function, and genetic operators are determined, the algorithm organizes its search on its own using the information gained during evolution. Under the selection strategy of "survival of the fittest, elimination of the unfit", individuals with high fitness have a higher probability of survival. Natural selection removes one of the biggest hurdles in algorithm design: the need to describe all the characteristics of a problem in advance and to specify the measures the algorithm should take for each of them. The genetic algorithm can therefore solve complex, unstructured problems [3].

Essential parallelism. Genetic algorithms search a population of points in parallel rather than a single point, and they possess implicit parallelism. Since the genetic algorithm organizes its search by population, multiple regions of the solution space can be searched at the same time while exchanging information. Although each step performs only an amount of computation proportional to the population size N, approximately O(N³) effective searches are carried out in essence, which lets the genetic algorithm obtain greater benefit from less computation.

Genetic algorithms need no derivatives or other auxiliary knowledge; they require only the objective function and the corresponding fitness function that determines the search direction [4].

Genetic algorithms use probabilistic genetic operations rather than deterministic rules.

Robustness. Since the genetic algorithm uses individual fitness to drive the evolution of the population, regardless of the structural characteristics of the problem being solved, only the corresponding fitness evaluation function needs to be designed when solving a different problem, without modifying any other part of the algorithm. The algorithm therefore has good robustness.

Genetic algorithms can generate many potential solutions to a given problem, with the final choice left to the user. They are particularly suitable for multi-objective search or situations where a set of alternative solutions must be determined.

The main applications of genetic algorithms include function optimization, combinatorial optimization, production scheduling, automatic control, intelligent robot control, image processing and pattern recognition, artificial life, genetic programming, and machine learning [5].

Hassan O F et al. demonstrated the convergence of the Levenberg–Marquardt (LM) algorithm, and Fan improved the LM algorithm and proved its convergence [6]. Many research results followed later improvements of Newton's method. Kumar S proposed the continuous Newton's method; in 2012 the multi-step Newton method was proposed, in which quadrature and variational techniques were applied to the problem of solving nonlinear equations [7]. To improve convergence, Qureshi U K proposed a class of BFGS-type algorithms with global and superlinear convergence; inspired by its application to symmetric nonlinear equations, some scholars have tried to use the faster-converging quasi-Newton method to solve such systems of nonlinear equations [8].

Building on this series of in-depth studies of nonlinear equations, the author proposes embedding a QN operator in the Genetic Algorithm (GA) and defining an appropriate fitness, thereby obtaining a hybrid computational intelligence algorithm that combines the advantages of the GA and the QN method: it not only converges quickly but can also solve CNLE with a large probability.

Problem Description and Quasi-Newton (QN) Method

Newton's method is a numerical method for solving NLE in which the number of equations equals the number of variables; it has local quadratic convergence. Next, the idea of Newton's method is extended to compatible nonlinear equations (CNLE), where the number of variables and the number of equations may differ:

$$f(x) = \left[ f_1(x)\; f_2(x)\; \cdots\; f_p(x) \right]^T = 0 \tag{1}$$

The dimension q of the variable vector x may differ from the number of equations p, that is, q ≠ p is possible.

Expand the function f(x) of formula (1) in a first-order Taylor expansion in a neighborhood of the point $x_k$:

$$f(x) \approx f(x_k) + \frac{df(x_k)}{dx}\left( x - x_k \right) \tag{2}$$

where $\frac{df(x_k)}{dx}$ is the gradient (Jacobian) matrix. The idea of the author's QN method is similar to Newton's method: at the (k+1)-th iteration, the new iterate $x_{k+1}$ is chosen so that $f(x_{k+1}) = 0$. It follows from equation (2) that $x_{k+1}$ satisfies

$$\frac{df(x_k)}{dx}\left( x_{k+1} - x_k \right) = -f(x_k) \tag{3}$$

The system (3) has p equations and q variables. If the system of equations is compatible, the solution $x_{k+1}$ need not be unique. From the theory of algebraic equations and the definition of generalized inverse matrices, the general solution of the algebraic equation (3) is

$$x_{k+1} = x_k - \left[ \frac{df(x_k)}{dx} \right]^{-} f(x_k) - \left( I - \left[ \frac{df(x_k)}{dx} \right]^{-} \frac{df(x_k)}{dx} \right) u, \quad \forall u \in \mathbb{R}^q \tag{4}$$

where $\left[ \frac{df(x_k)}{dx} \right]^{-}$ is a generalized inverse of $\frac{df(x_k)}{dx}$. According to generalized inverse matrix theory, it is calculated as

$$\left[ \frac{df(x_k)}{dx} \right]^{-} = \begin{cases} \left[ \frac{df(x_k)}{dx} \right]^{-1} & \frac{df(x_k)}{dx} \text{ is an invertible square matrix} \\[6pt] \frac{df^T(x_k)}{dx} \left[ \frac{df(x_k)}{dx} \frac{df^T(x_k)}{dx} \right]^{-1} & \frac{df(x_k)}{dx} \text{ has full row rank} \\[6pt] \left[ \frac{df^T(x_k)}{dx} \frac{df(x_k)}{dx} \right]^{-1} \frac{df^T(x_k)}{dx} & \frac{df(x_k)}{dx} \text{ has full column rank} \end{cases} \tag{5}$$
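As a concrete illustration, the case analysis of formula (5) might be realized in NumPy as follows. This is a minimal sketch; the function name and the Moore-Penrose fallback for the rank-deficient case are assumptions rather than part of the method:

import numpy as np

def generalized_inverse(J):
    # Generalized inverse of J = df(x_k)/dx with shape (p, q), per formula (5).
    p, q = J.shape
    rank = np.linalg.matrix_rank(J)
    if p == q and rank == p:
        return np.linalg.inv(J)                 # invertible square matrix
    if rank == p:                               # full row rank
        return J.T @ np.linalg.inv(J @ J.T)
    if rank == q:                               # full column rank
        return np.linalg.inv(J.T @ J) @ J.T
    return np.linalg.pinv(J)                    # fallback: Moore-Penrose pseudoinverse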

It can be seen from formulas (4) and (5) that when the number of equations of the CNLE equals the number of variables and $\frac{df(x_k)}{dx}$ is an invertible square matrix, the QN method reduces to Newton's method [9]. Combining the above ideas, the calculation steps of the QN method for CNLE (1) are as follows (a code sketch is given after Step 5):

Step 1 Select the initial iteration value x0.

Step 2 Calculate the iterate $\bar{x}_1$ according to

$$\bar{x}_1 = x_0 - \left[ \frac{df(x_0)}{dx} \right]^{-} f(x_0) \tag{6}$$

Step 3 If $\bar{x}_1$ yields $\| f(\bar{x}_1) \| \ge \| f(x_0) \|$, the following correction of $\bar{x}_1$ is used as the new iterate:

$$x_1 = \lambda x_0 + (1 - \lambda) \left\{ \bar{x}_1 - \left( I - \left[ \frac{df(x_0)}{dx} \right]^{-} \frac{df(x_0)}{dx} \right) u \right\}, \quad 0 \le \lambda \le 1,\; \forall u \in \mathbb{R}^q \tag{7}$$

where λ is the downhill factor. This correction ensures that the function values f(x₁) used in the iterative process decrease monotonically.

Step 4 If the iterate satisfies the termination condition δ < ε₁ or ‖f(x₁)‖ < ε₂, terminate the iteration and take x₁ as the solution of CNLE (1); otherwise, go to Step 5. Here ε₁ and ε₂ are allowable errors, and

$$\delta = \begin{cases} \left\| x_1 - x_0 \right\| & \text{when } \left\| x_1 \right\| < C \\[6pt] \dfrac{\left\| x_1 - x_0 \right\|}{\left\| x_1 \right\|} & \text{when } \left\| x_1 \right\| \ge C \end{cases}$$

where C is the constant controlling the switch between absolute and relative error; generally C = 1 can be taken.

Step 5 If the iteration has not reached the predetermined number of iterations, replace x₀ with x₁ and return to Step 2 to continue; otherwise the method fails. The reason for failure may be that the initial value was not chosen properly, or that CNLE (1) has no solution (i.e., it is incompatible).
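The five steps above might be realized as the following sketch. It makes simplifying assumptions the paper leaves open: a forward-difference Jacobian, u = 0 in correction (7), a coarse grid for the downhill factor λ, and numpy.linalg.pinv as the generalized inverse:

import numpy as np

def numerical_jacobian(f, x, h=1e-7):
    # Forward-difference approximation of the p x q matrix df/dx.
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.asarray(f(xp), dtype=float) - fx) / h
    return J

def qn_solve(f, x0, eps1=1e-8, eps2=1e-8, max_iter=100, C=1.0):
    # Steps 1-5 of the QN method for compatible nonlinear equations (sketch).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        J = numerical_jacobian(f, x)
        x_bar = x - np.linalg.pinv(J) @ f(x)             # Step 2, formula (6)
        x_new, lam = x_bar, 0.0
        while np.linalg.norm(f(x_new)) >= np.linalg.norm(f(x)) and lam < 1.0:
            lam += 0.1                                   # Step 3: raise the downhill factor
            x_new = lam * x + (1.0 - lam) * x_bar        # formula (7) with u = 0
        delta = np.linalg.norm(x_new - x)
        if np.linalg.norm(x_new) >= C:
            delta /= np.linalg.norm(x_new)               # relative instead of absolute error
        if delta < eps1 or np.linalg.norm(f(x_new)) < eps2:
            return x_new                                 # Step 4: converged
        x = x_new                                        # Step 5: next iteration
    return None                                          # method failed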

Similar to the convergence analysis of Newton's method, the QN method has the same local quadratic convergence as Newton's method. The QN method is therefore a very efficient CNLE solver on a local scale, but less effective for strongly nonlinear CNLE. For this reason, it is worthwhile to study solution methods for strongly nonlinear CNLE with high solving efficiency [10].

Hybrid Computational Intelligence Algorithm (HCIA) for Solving Nonlinear System of Equations
$$f(x) = \left[ f_1(x)\; f_2(x)\; \cdots\; f_p(x) \right]^T = 0, \quad a_i \le x_i \le b_i \tag{8}$$

To generalize the GA for global optimization to solving CNLE (8), the problem must be recast as the equivalent optimization problem

$$\min_{a_i \le x_i \le b_i} F(x) = \min_{a_i \le x_i \le b_i} \sum_{j=1}^{p} c_j \left| f_j(x) \right|, \quad c_j > 0 \tag{9}$$

The parameter c_j is a weighting coefficient. Obviously, solving CNLE (8) is equivalent to finding the global optimum of optimization problem (9).
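As a sketch, objective (9) amounts to a weighted sum of absolute residuals; the function name and the default weights c_j = 1 (which match the author's later experiments) are assumptions:

import numpy as np

def F(f, x, c=None):
    # Objective of problem (9): sum of c_j * |f_j(x)| over all equations.
    r = np.abs(np.asarray(f(x), dtype=float))
    w = np.ones_like(r) if c is None else np.asarray(c, dtype=float)
    return float(w @ r)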

In view of the shortcomings of the QN method and the GA in solving CNLE, and combining their respective advantages, the author proposes the following HCIA (a Python sketch follows the pseudocode):

Begin
    Randomly generate the initial parent population and calculate its fitness
    Repeat
        With probability pc, select two individuals from the population and perform the encode-crossover-decode operation; add both parents and offspring to the offspring population;
        With probability pm, perform the encode-mutate-decode operation on each individual in the population; add both parent and offspring to the offspring population;
        With probability pn, perform the QN operator on each individual in the population, and let the new iterate, as a child, replace its parent in the offspring population;
        Calculate the fitness of each individual;
        With the given population size, select the offspring population for the next breeding round (the fittest breed, the unfit are eliminated)
    Until (termination condition)
End
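A compact Python sketch of this breeding loop is given below. It is not the author's implementation: real-coded crossover and mutation stand in for the binary encode-operate-decode cycle described in the data-structure subsection, qn_step is assumed to perform one QN-operator iteration (sketched in the QN-operator subsection), and F is the objective of problem (9) for the system at hand.

import numpy as np

def hcia(F, qn_step, lo, hi, pop_size=30, pc=0.9, pm=0.1, pn=0.1,
         generations=2500, tol=1e-6, rng=None):
    # GA breeding loop with an embedded QN operator (sketch).
    rng = rng or np.random.default_rng()
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        children = [p.copy() for p in pop]
        for i in range(pop_size):
            if rng.random() < pc:                        # crossover with probability pc
                mate = pop[rng.integers(pop_size)]
                a = rng.random()
                children.append(np.clip(a * pop[i] + (1 - a) * mate, lo, hi))
            if rng.random() < pm:                        # mutation with probability pm
                child = pop[i].copy()
                k = rng.integers(len(lo))
                child[k] = rng.uniform(lo[k], hi[k])
                children.append(child)
            if rng.random() < pn:                        # QN operator with probability pn;
                children[i] = qn_step(children[i])       # the child replaces its parent
        children = np.asarray(children)
        vals = np.asarray([F(c) for c in children])
        if vals.min() < tol:
            return children[vals.argmin()]               # solution found
        keep = []                                        # pairwise-random selection: the
        while len(keep) < pop_size:                      # fitter of two random candidates
            i, j = rng.integers(len(children), size=2)   # survives
            keep.append(i if vals[i] < vals[j] else j)
        pop = children[keep]
    vals = np.asarray([F(p) for p in pop])
    return pop[vals.argmin()]                            # best point found

Selecting by smaller objective value is equivalent here to selecting by larger fitness, since fitness (10) below is a decreasing function of F(x).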

For the above HCIA, the description is as follows:

Fitness Definition

Since the selection probabilities used in crossover and offspring selection are proportional to fitness, the definition of fitness has a global impact on the GA and HCIA and is one of the keys to the success of the algorithm [11]. Fitness is defined as

$$g(x) = F_{t\max} - F(x) + k_1 \left( F_{t\max} - F_{t\min} \right), \quad k_1 > 0 \tag{10}$$

where F_tmax and F_tmin are the maximum and minimum values of F(x) in the current population, respectively, and k₁ is a parameter that controls the ratio of maximum to minimum fitness. From the above formula, the ratio of the maximum to the minimum fitness in each generation is $\frac{1 + k_1}{k_1}$. If k₁ is too large, the ratio approaches 1; that is, the individuals with the largest and smallest function values in the current population have nearly equal fitness and selection probability, and the crossover and mutation operators degenerate toward a completely random search. If k₁ is too small, the ratio approaches ∞, which makes the fitness and selection probability of individuals with larger function values too small; they lose the chance to reproduce, the population concentrates rapidly on one or a few individuals, and the probability of a successful solution drops. In general, a value of k₁ between 0.01 and 0.1 is suitable, giving a maximum-to-minimum fitness ratio within a generation between 11 and 101.

Since F_tmax and F_tmin change from generation to generation with the population, HCIA is highly adaptive and robust in determining fitness and selection probabilities [12].
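As a sketch, fitness (10) applied to the objective values of a whole population (the function and argument names are assumptions):

import numpy as np

def fitness(values, k1=0.05):
    # Formula (10): F_tmax - F(x) + k1 * (F_tmax - F_tmin) over the current population.
    v = np.asarray(values, dtype=float)
    Ftmax, Ftmin = v.max(), v.min()
    return Ftmax - v + k1 * (Ftmax - Ftmin)

With k1 = 0.05, the maximum-to-minimum fitness ratio is (1 + 0.05)/0.05 = 21, inside the band of 11 to 101 produced by the recommended range 0.01 ≤ k1 ≤ 0.1.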

Data structure and coding method

The main data structure of a GA is a digital bit string, commonly binary. When a GA is applied to solving systems of equations, the bit-string length and the encoding method have a decisive influence on the calculation accuracy and on the diversity of individuals in the population, and they directly affect the solution.

HCIA adopts a hybrid data structure: each individual in the population is stored as real numbers, and only when individuals are selected for crossover and mutation operations are they encoded as the traditional GA's binary bit strings; after the operation, the offspring are decoded and added to the new offspring population. The advantage of this hybrid data structure is that it avoids the loss of accuracy caused by the finite word length of the encoding, and it gives full play to the high accuracy of the QN operator and its ability to search for solutions locally and meticulously.
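The encode-operate-decode cycle might look like the following sketch, assuming a fixed-point binary code over [a_i, b_i]; the word length of 16 bits per variable is an illustrative choice, not one fixed by the paper:

def encode(x, lo, hi, bits=16):
    # Map a real vector in [lo, hi] to a concatenated fixed-point bit string.
    scale = 2 ** bits - 1
    ints = [round((xi - a) / (b - a) * scale) for xi, a, b in zip(x, lo, hi)]
    return ''.join(format(v, '0{}b'.format(bits)) for v in ints)

def decode(s, lo, hi, bits=16):
    # Inverse of encode: recover the real vector from the bit string.
    ints = [int(s[i:i + bits], 2) for i in range(0, len(s), bits)]
    return [a + v / (2 ** bits - 1) * (b - a) for v, a, b in zip(ints, lo, hi)]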

Quasi-Newton (QN) operator

To accelerate the convergence of the GA when solving CNLE, and to exploit the speed and accuracy of traditional numerical algorithms, a QN operator that performs one iteration of the QN method is embedded in HCIA. For the new offspring produced in each breeding round, the probability pn determines whether the QN operator iteration is applied [13].

The specific operations performed by the QN operator are as follows (a code sketch is given after the loop):

Repeat

Generate a random number in [0, 1] for the i-th individual in the population. If the random number does not exceed the predetermined QN operator probability pn, perform the following operations: calculate

$$\frac{df(x_k)}{dx} \;\text{and}\; \left[ \frac{df(x_k)}{dx} \right]^{-}, \quad \text{and} \quad \bar{x}_{k+1} = x_k - \left[ \frac{df(x_k)}{dx} \right]^{-} f(x_k) \tag{11}$$

The downhill factor λ is chosen appropriately so that the new iterate $x_{k+1}$, calculated as follows, makes the function F(x) decrease monotonically:

$$x_{k+1} = \lambda x_k + (1 - \lambda) \left\{ \bar{x}_{k+1} - \left( I - \left[ \frac{df(x_k)}{dx} \right]^{-} \frac{df(x_k)}{dx} \right) u \right\}, \quad 0 \le \lambda \le 1,\; u \in \mathbb{R}^q \tag{12}$$

The generated offspring replaces its parent and joins the new offspring population;

Until (search the entire population).
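Under the same assumptions as the QN-method sketch (numerical Jacobian, u = 0, pseudoinverse as generalized inverse), one QN-operator application might read as follows. It reuses the numerical_jacobian helper defined earlier, and the grid over λ is an illustrative way to pick the downhill factor:

import numpy as np

def qn_step(f, x, lam_grid=None):
    # One QN-operator iteration: formula (11), then formula (12) with u = 0,
    # keeping the downhill factor that most reduces the residual norm.
    lam_grid = np.linspace(0.0, 1.0, 11) if lam_grid is None else lam_grid
    J = numerical_jacobian(f, x)                       # df(x_k)/dx
    x_bar = x - np.linalg.pinv(J) @ f(x)               # formula (11)
    candidates = [lam * x + (1 - lam) * x_bar for lam in lam_grid]  # formula (12)
    return min(candidates, key=lambda z: np.linalg.norm(f(z)))

A call such as lambda x: qn_step(f, x) would adapt this to the one-argument operator assumed by the hcia sketch above.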

In short, HCIA's genetic operators (crossover, mutation, and selection) perform the macro search, handling large-scale exploration, while the iteration inside the QN operator performs the local, microscopic search, handling small-scale refinement and search acceleration.

The QN operator probability pn should be large enough that, during reproduction (iteration), every individual in the population has the chance to undergo a certain number of QN operator applications.

Therefore, the factors to be considered in determining the value of pn are:

The iterative convergence of the QN method for the CNLE being solved. If the iteration converges quickly, the corresponding pn can be smaller; otherwise it should be larger.

The predetermined number of breeding rounds (iterations). If the predetermined number of generations is larger, the corresponding pn can be smaller; otherwise it should be larger.

Selection operator, selection probability, and control of the distance between individuals

Compared with traditional numerical algorithms, the GA can obtain a solution in the global scope with a large probability because two basic principles are reflected in its design: one is survival of the fittest, and the other is the relative randomness of selection. The selection operator, the selection probability, and the control of the distance between individuals are the most important means of embodying these principles. HCIA's main considerations are:

After crossover and mutation, both offspring and parents enter the offspring candidate population. Since the QN method guarantees that the point sequence it generates is monotonically decreasing, the new individuals generated by the QN operator inherit the good qualities of their parents; the offspring can therefore replace the parent in the candidate population, without both parent and offspring entering it.

When selecting the offspring population for the next breeding round, two individuals are randomly drawn from the candidate population each time, and the one with larger fitness enters the offspring population, until all the offspring needed for reproduction have been produced. This ensures relative survival of the fittest while keeping the selection somewhat random [14].

To prevent the population from concentrating rapidly on one or a few individuals during breeding, which reduces the probability of a successful solution, the distance between individuals in the population must also be calculated during the selection operation (the 2-norm and the Hamming distance are commonly used), and of two individuals that are close together one must be eliminated. Since the QN operator in HCIA can search locally and meticulously, the required relative distance between individuals can be made larger (generally, a relative error of more than 10%), enabling a successful solution with greater probability over a larger region of the space. A selection sketch follows.
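In this sketch of the selection scheme with distance control, the 10% relative-distance threshold comes from the text, while the retry cap and tie handling are assumptions:

import numpy as np

def select_next_generation(children, fit, pop_size, min_rel_dist=0.1,
                           max_tries=10000, rng=None):
    # Pairwise-random selection with crowding control: of two randomly drawn
    # candidates, the one with larger fitness survives; survivors closer than
    # ~10% relative distance to an already chosen individual are skipped.
    rng = rng or np.random.default_rng()
    chosen = []
    for _ in range(max_tries):
        if len(chosen) == pop_size:
            break
        i, j = rng.integers(len(children), size=2)
        w = children[i] if fit[i] > fit[j] else children[j]
        if all(np.linalg.norm(w - c) >= min_rel_dist * max(np.linalg.norm(c), 1.0)
               for c in chosen):
            chosen.append(w)
    return np.asarray(chosen)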

Numerical calculations

This section discusses the solution of the following three examples on the specified interval.

Example 1

$$f(x) = \left( x_1 - 5x_2 \right)^2 + \left( x_2 - 2x_3 \right)^2 + \left( 3x_1 + x_3 \right)^2 = 0, \quad -1 \le x_i \le 1 \tag{13}$$

Example 2

$$f(x) = \left( x_1 - 5x_2 \right)^2 + \left( x_2 - 2x_3 \right)^2 + \left( 3x_1 + x_3 \right)^2 + 40 \left[ \sin^2(10x_1) + \sin^2(10x_2) + \sin^2(10x_3) \right] = 0, \quad -1 \le x_i \le 1 \tag{14}$$

Example 3

$$\begin{cases} f_1(x) = \left( x_1 - 5x_2 \right)^2 + 40 \sin^2(10x_3) = 0 \\ f_2(x) = \left( x_2 - 2x_3 \right)^2 + 40 \sin^2(10x_1) = 0 \\ f_3(x) = \left( 3x_1 - x_3 \right)^2 + 40 \sin^2(10x_2) = 0 \end{cases} \quad -1 \le x_i \le 1 \tag{15}$$
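For use with the sketches above, the three test problems might be coded as callables returning residual vectors (the function names are assumptions):

import numpy as np

def f_example1(x):
    # Example 1, formula (13): sum of three squared linear forms; one equation, three variables.
    x1, x2, x3 = x
    return np.array([(x1 - 5*x2)**2 + (x2 - 2*x3)**2 + (3*x1 + x3)**2])

def f_example2(x):
    # Example 2, formula (14): Example 1 plus a strongly multimodal sin^2 term.
    x1, x2, x3 = x
    return f_example1(x) + 40 * (np.sin(10*x1)**2 + np.sin(10*x2)**2 + np.sin(10*x3)**2)

def f_example3(x):
    # Example 3, formula (15): three equations in three variables, each multimodal.
    x1, x2, x3 = x
    return np.array([(x1 - 5*x2)**2 + 40*np.sin(10*x3)**2,
                     (x2 - 2*x3)**2 + 40*np.sin(10*x1)**2,
                     (3*x1 - x3)**2 + 40*np.sin(10*x2)**2])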

The author solves the examples with the QN method and the proposed HCIA, taking all weighting coefficients c_j in formula (9) equal to 1. Since the number of equations and the number of variables in Example 3 are both 3, the QN method is the traditional Newton's method for this example. In the calculations, 1500 three-dimensional random vectors in [−1, 1] are generated, each 30 of which form an initial breeding population, for a total of 50 runs. Considering the code length used by the GA and HCIA, for Example 1 the solution is considered found and the algorithm successful if F(x) < 10⁻⁶; for Examples 2 and 3 the search is considered successful if F(x) < 10⁻⁵. The calculation results are shown in Table 1 and Table 2.

Table 1. Calculation results of Example 1

Method | Number / probability of finding a solution | Number of generations
Quasi-Newton method (Pc = Pm = 0, Pn = 1) | 60 / 1.2 | 2500
HCIA (Pc = 0.9, Pm = 0.1), Pn = 0 | 46 / 0.92 | 10000
HCIA, Pn = 0.005 | 60 / 1.2 | 2500
HCIA, Pn = 0.01 | 60 / 1.2 | 2500
HCIA, Pn = 0.025 | 60 / 1.2 | 2500
HCIA, Pn = 0.05 | 60 / 1.2 | 2500
HCIA, Pn = 0.075 | 60 / 1.2 | 2500
HCIA, Pn = 0.1 | 60 / 1.2 | 2500
HCIA, Pn = 0.2 | 60 / 1.2 | 2500

Table 2. Calculation results of Example 2 and Example 3

Method | Example 2: number / probability of finding a solution | Example 3: number / probability of finding a solution | Number of generations
Quasi-Newton method (Pc = Pm = 0, Pn = 1) | 4 / 0.08 | 3 / 0.06 | 2500
HCIA (Pc = 0.9, Pm = 0.1), Pn = 0 | 6 / 0.13 | 6 / 0.12 | 2500
HCIA, Pn = 0.025 | 17 / 0.44 | 20 / 0.4 | 2500
HCIA, Pn = 0.05 | 21 / 0.54 | 31 / 0.62 | 2500
HCIA, Pn = 0.075 | 31 / 0.62 | 33 / 0.67 | 2500
HCIA, Pn = 0.1 | 39 / 0.68 | 36 / 0.7 | 2500
HCIA, Pn = 0.15 | 45 / 0.92 | 52 / 0.74 | 2500
HCIA, Pn = 0.2 | 48 / 0.98 | 50 / 0.94 | 2500
HCIA, Pn = 0.3 | 60 / 1.2 | 60 / 1.2 | 2500
HCIA, Pn = 0.4 | 60 / 1.2 | 60 / 1.2 | 2500
HCIA, Pn = 0.5 | 60 / 1.2 | 60 / 1.2 | 2500

The above calculation results are explained and analyzed as follows:

When Pc = Pm = 0 and Pn = 1, HCIA is equivalent to the QN method searching simultaneously from 30 initial iteration points; the three examples were solved by the QN method in this way. The results show that the solutions of all three examples are (0, 0, 0). For Example 1 the QN method always converges to the solution; however, because the functions f_i(x) of Examples 2 and 3 do not decrease monotonically and have many local extreme points, the method often fails to converge to the solution for those examples, and the solution efficiency is low.

When Pn = 0, HCIA degenerates into a pure GA; the three examples were solved by the GA in this way. For a CNLE composed of monotonic functions like Example 1, within the specified accuracy and number of generations, the probability that the GA obtains a solution is lower than that of the QN method, and it consumes more computing time (cost). For a CNLE composed of non-monotonic functions with multiple local extreme points, like Examples 2 and 3, the probability of obtaining a solution is higher than that of the QN method [15].

The proposed HCIA combines the strengths of the GA and the QN method: it gives full play to the crossover and mutation operators' wide-ranging search for global solutions and to the QN operator's local, meticulous search. For a CNLE composed of non-monotonic functions with multiple local extreme points, like Examples 2 and 3, its probability of finding a solution is significantly higher than that of the QN method and the GA; for a CNLE composed of monotonic functions like Example 1, its probability of obtaining a solution is comparable to that of the QN method. The selection probability pn of the QN operator also directly affects the solution efficiency. Generally speaking, for a strongly nonlinear CNLE composed of multimodal functions, pn can be larger; for a weakly nonlinear CNLE composed of functions with fewer extreme points and stronger monotonicity, pn can be smaller. Figures 1 and 2 plot, for Examples 2 and 3 respectively, the probability of finding a solution against pn.

Figure 1. The relationship between the probability of finding a solution and pn when solving Example 2 with the hybrid computational intelligence algorithm.

Figure 2. The relationship between the probability of finding a solution and pn when solving Example 3 with the hybrid computational intelligence algorithm.

The above calculation results show that the HCIA proposed by the author on the basis of the GA and the QN method obtains a solution in the specified space with a probability significantly greater than that of the QN method or the GA alone, and it is a feasible numerical algorithm for CNLE [16].

Conclusion

The author's HCIA, which combines the strengths of the GA and Newton's method, gives full play to the genetic operators' large-scale search for global solutions and to the Newton operator's local, meticulous search. For a system of nonlinear equations composed of non-monotonic multimodal functions like Example 2, its probability of obtaining the solution is significantly higher than that of Newton's method and the GA; for a system composed of monotonic functions like Example 1, its probability of obtaining a solution is comparable to that of Newton's method. The selection probability of the Newton operator also directly affects the solution efficiency: generally, it can be larger for strongly nonlinear equations composed of multimodal functions, and smaller for weakly nonlinear equations composed of functions with few extreme points and strong monotonicity.



References

[1] Yamadjako A E, Adomou A, Yélomè J. F. Kpomahou, et al. Soliton-Like Spherical Symmetric Solutions to the Electromagnetic and Scalar Nonlinear Induction Field Equations in the General Relativity Theory[J]. Journal of High Energy Physics, Gravitation and Cosmology, 2022, 8(1):17. doi:10.4236/jhepgc.2022.81011

[2] Candelario G, Cordero A, Torregrosa J R, et al. An optimal and low computational cost fractional Newton-type method for solving nonlinear equations[J]. Applied Mathematics Letters, 2021, 124(1):107650. doi:10.1016/j.aml.2021.107650

[3] Begiato R G, Custódio A L, Gomes-Ruggiero M A. A global hybrid derivative-free method for high-dimensional systems of nonlinear equations[J]. Computational Optimization and Applications, 2020, 75(1):93–112. doi:10.1007/s10589-019-00149-y

[4] Peric M, Ilic S, Vuckovic A, et al. Improving the Efficiency of Hybrid Boundary Element Method for Electrostatic Problems Solving[J]. Applied Computational Electromagnetics Society Journal, 2020, 35(8):872–877. doi:10.47037/2020.ACES.J.350804

[5] Alamdar A, Samandi P, Hanifeh S, et al. Investigation of a Hybrid Kinematic Calibration Method for the Sina Surgical Robot[J]. IEEE Robotics and Automation Letters, 2020, 5(4):5276–5282. doi:10.1109/LRA.2020.3007466

[6] Hassan O F, Jamal A, Abdel-Khalek S. Genetic algorithm and numerical methods for solving linear and nonlinear system of equations: a comparative study[J]. Journal of Intelligent and Fuzzy Systems, 2019, 38(3):1–6. doi:10.3233/JIFS-179572

[7] Kumar S, Sharma J R. A family of derivative-free methods for solving nonlinear equations[J]. Annali dell'Università di Ferrara, 2021, 67(2):355–367. doi:10.1007/s11565-021-00377-3

[8] Qureshi U K, Shaikhi A A, Shaikh F K, et al. New Simpson type method for solving nonlinear equations[J]. Open Journal of Mathematical Sciences, 2021, 5(1):94–100. doi:10.30538/oms2021.0148

[9] Jafari R, Jafarian A. A new computational method for solving fully fuzzy nonlinear matrix equations[J]. International Journal of Fuzzy Computation and Modelling, 2019, 2(4):275–. doi:10.1504/IJFCM.2019.100317

[10] Yamadjako A E, Adomou A, Yélomè J. F. Kpomahou, et al. Exact Static Plane Symmetric Soliton-Like Solutions to the Nonlinear Interacting Electromagnetic and Scalar Field Equations in General Relativity[J]. Journal of High Energy Physics, Gravitation and Cosmology, 2022, 8(1):14. doi:10.4236/jhepgc.2022.81012

[11] Amer Y A, El-Sayed A T, Ahmed E E E. Vibration reduction of vertical conveyor system via negative cubic velocity feedback under external and parametric excitations[J]. Journal of Mechanical Science and Technology, 2022, 36(2):543–551. doi:10.1007/s12206-022-0103-0

[12] Shilpa S, Pai D D, Michael M, et al. Shocks and solitons in collisional dense laser produced plasmas[J]. Physica Scripta, 2022, 97(4):045601. doi:10.1088/1402-4896/ac5665

[13] Hashim K H, Shiker M A K. Using a new line search method with gradient direction to solve nonlinear systems of equations[J]. Journal of Physics: Conference Series, 2021, 1804(1):012106. doi:10.1088/1742-6596/1804/1/012106

[14] Arslan D. The Comparison Study of Hybrid Method with RDTM for Solving Rosenau-Hyman Equation[J]. Applied Mathematics and Nonlinear Sciences, 2020, 5(1):267–274. doi:10.2478/amns.2020.1.00024

[15] İlhan E, Kıymaz İ O. A generalization of truncated M-fractional derivative and applications to fractional differential equations[J]. Applied Mathematics and Nonlinear Sciences, 2020, 5(1):171–188. doi:10.2478/amns.2020.1.00016

[16] El-Borhamy M, Mosalam N. On the existence of periodic solution and the transition to chaos of Rayleigh-Duffing equation with application of gyro dynamic[J]. Applied Mathematics and Nonlinear Sciences, 2020, 5(1):93–108. doi:10.2478/amns.2020.1.00010
