Journal Details
License: Open Access
Format: Journal
eISSN: 2444-8656
First Published: 01 Jan 2016
Publication Frequency: 2 times per year
Languages: English

Optimization of Linear Algebra Core Function Framework on Multicore Processors

Published online: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Pages: -
Received: 13 Apr 2022
Accepted: 10 Jun 2022
Introduction

The multi-core processor (FPGI for short) is a new type of high-performance computing system developed by PMC Corporation in the U.S. Its most important features are high accuracy, high speed and low power consumption, and its core functions mainly include data sets, state spaces and data compression[1]. In this chip design, researchers conduct a comparative analysis under different parameters: first, the relationships between the indicators are established; second, related factors such as the mutual influence among the performance indicators are examined; finally, the optimal control variables are obtained by comparison, and the best optimization scheme is developed from these results to meet the target requirements, so that the optimization scheme of the multi-core processor is finally determined[2].

Multi-core design is a form of computer-based bionics: it uses computers for data processing in place of the human brain to accomplish tasks such as analysis, modeling and optimization of complex systems. At present, many companies at home and abroad have started to study and apply multi-channel development technology[3,4]. It offers strong data-acquisition capability and high computing speed, and it has been widely used in the military field. The microcontroller can run stably in various media environments, which makes it one of the current research hotspots; at the same time, as the technology continues to develop, requirements for multi-core processor performance are becoming higher and harder to meet. How to optimize the design to achieve high performance with low power consumption is therefore the current problem to be solved, and since the core function of the multi-core processor is an important parameter, its stability, reliability and low complexity must be considered during design and optimization[5].

Multi-core processor mapping refers to operating on input variables with certain mathematical step functions, rather than directly mapping input objects containing characteristic variables such as unknowns and value domains to the corresponding output objects[6]. The core functions based on multi-core processors are nonlinear integer functions, which are stochastic in nature and require consideration of the complexity of the relationships between objects during the solution process. They are commonly applied as follows: first find a parameter set (i.e., the solution space); then calculate the optimal feasible solution according to the required value. Because practical application environments and requirements differ, different types of multicore processors can be used to achieve the best results[7,8]. The function is a vector structure composed of real and complex numbers, so optimizing the function means finding the optimal solution after considering these parameters together[9].

Linear function algorithms on multi-core processors

1) Vector operations: multiplication, division and square-root operations can be expressed as a single general operation, as shown in Equation (1):

$$VEC = x_i \,(*)\, y_i^{p} \,(+)\, z_i, \quad i \in \{0,1,2,3\},\ (*) \in \{\times, /\},\ (+) \in \{+, -\},\ p \in \{0.5, 1\}. \tag{1}$$

The expression can be converted to the logarithmic domain, as shown in Equation (2):

$$VEC = 2^{\log_2 x_i \,(+)\, (\log_2 y_i \gg q)} \,(+)\, z_i, \quad i \in \{0,1,2,3\},\ (+) \in \{+, -\},\ q \in \{0, 1\}. \tag{2}$$
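As an illustration of this log-domain rewriting, the sketch below works in floating point, so the hardware's fixed-point right shift of log2 y by q bits is written as a scaling by p = 2^(-q); the function name `vec` is ours, not from the paper:

```python
import math

def vec(x, y, z, mul=True, add=True, p=1.0):
    """Evaluate VEC = x (*) y^p (+) z in the logarithmic domain (Eq. 2 sketch).

    Multiplication/division of x and y^p become addition/subtraction of
    logarithms; the square root (p = 0.5) is a halving of log2(y), which
    the hardware realizes as a right shift by one bit (q = 1).
    """
    sign = 1.0 if mul else -1.0
    log_term = math.log2(x) + sign * p * math.log2(y)
    return 2.0 ** log_term + (z if add else -z)

# x * sqrt(y) + z computed via logarithms: 3 * sqrt(16) + 2 = 14
assert math.isclose(vec(3.0, 16.0, 2.0, p=0.5), 14.0)
# x / y - z: 8 / 2 - 1 = 3
assert math.isclose(vec(8.0, 2.0, 1.0, mul=False, add=False), 3.0)
```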

2) Trigonometric operations: based on the VEC structure above, trigonometric functions, hyperbolic functions and their corresponding inverse functions can be realized[10]. Here only the sine, cosine and tangent functions are considered; they are calculated by the Taylor series expansion of the trigonometric functions, as shown in Equation (3):

$$TRG = c_0 x^{k_0} \,(+)\, c_1 x^{k_1} \,(+)\, c_2 x^{k_2} \,(+)\, c_3 x^{k_3} \,(+)\, c_4 x^{k_4}, \quad (+) \in \{+, -\}. \tag{3}$$

The expression can be converted as shown in Equation (4):

$$VEC = 2^{\log_2 x_i \,(+)\, (\log_2 y_i \gg q)} \,(+)\, z_i, \quad i \in \{0,1,2,3\},\ (+) \in \{+, -\},\ q \in \{1, 0\}. \tag{4}$$
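As an illustration of the TRG form, the sketch below evaluates a five-term expansion of sin(x) using the standard Maclaurin coefficients (an assumption for illustration; the paper does not list its c_i and k_i values):

```python
import math

# Five (coefficient, exponent) pairs c_i, k_i of the Maclaurin series of
# sin(x), matching the alternating-sign TRG structure of Eq. (3).
TERMS = [(1.0, 1), (-1 / 6, 3), (1 / 120, 5), (-1 / 5040, 7), (1 / 362880, 9)]

def trg_sin(x):
    """Sum of c_i * x^k_i, i.e. sin(x) truncated after five terms."""
    return sum(c * x ** k for c, k in TERMS)

# Close to math.sin for small arguments; the truncation error is bounded
# by the first omitted term, x^11 / 11!.
assert abs(trg_sin(0.5) - math.sin(0.5)) < 1e-8
```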

3) Power and logarithmic operations: power operations can be converted into multiplications in the logarithmic domain, as shown in Equation (5):

$$POW = x^y = 2^{y \cdot \log_2 x} \tag{5}$$

A logarithm to an arbitrary base can likewise be calculated by converting it into a multiplication, as shown in Equation (6):

$$LOG = \log_b x = k \cdot \log_2 x, \quad k = 1/\log_2 b. \tag{6}$$
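Equations (5) and (6) can be checked directly; the sketch below mirrors them in floating point (the function names are ours):

```python
import math

def pow_log_domain(x, y):
    """Eq. (5): x^y evaluated as 2^(y * log2 x), turning the power
    into a single multiplication in the logarithmic domain."""
    return 2.0 ** (y * math.log2(x))

def log_base(b, x):
    """Eq. (6): log_b x = k * log2 x with the constant k = 1 / log2 b,
    so only base-2 logarithm hardware is needed."""
    return (1.0 / math.log2(b)) * math.log2(x)

assert math.isclose(pow_log_domain(3.0, 4.0), 81.0)   # 3^4
assert math.isclose(log_base(10.0, 1000.0), 3.0)      # log10(1000)
```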

4) Logarithmic conversion: any fixed-point number can be approximated through log2 x, so that an operation on any two numbers can be converted into a logarithmic-domain operation, which reduces computational complexity[11]. Any 32-bit binary fixed-point number x can be expressed as shown in Equation (7):

$$x = z_k z_{k-1} \cdots z_2 z_1 z_0 \,.\, z_{-1} z_{-2} z_{-3} \cdots z_j, \qquad x = \sum_{i=j}^{k} 2^i z_i, \quad z_i \in \{0, 1\} \tag{7}$$

If the highest bit z_k is 1, x can be expressed as shown in Equation (8):

$$x = 2^k (1 + f), \qquad f = \sum_{i=j}^{k-1} 2^{i-k} z_i, \quad 0 \le f < 1 \tag{8}$$

Its logarithm can then be expressed as shown in Equation (9):

$$\log_2 x = k + \log_2(1 + f) \tag{9}$$

where k is the integer part of the logarithm and log2(1 + f) is its fractional part in the range [0, 1). With piecewise linear approximation, the nonlinear term log2(1 + f) can be approximated as shown in Equation (10):

$$\log_2(1 + f) \cong a_i f + b_i \tag{10}$$

Based on area and power-consumption considerations, optimal coefficients are chosen, and Equation (10) can be written as Equation (11):

$$a_i f + b_i \cong f + [\,inv_i^0(f) \gg p_i\,] + [\,inv_i^1(f) \gg q_i\,] + [\,inv_i^2(f) \gg r_i\,] + b_i \triangleq \log_2(1 + f). \tag{11}$$

The slope term a_i f is computed as the sum of the shift results, inv_i^j is a bit-flip function, and p_i, q_i and r_i are the shift counts defined for each approximation region i. Therefore, log2 x can be expressed as shown in Equation (12):

$$\log_2 x = k + \log_2(1 + f) \tag{12}$$
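The leading-one detection of Equations (7)-(9) and the piecewise-linear fit of Equation (10) can be sketched as follows. For simplicity a single region with a_i = 1, b_i = 0 (Mitchell's approximation) is assumed; the region-wise shift-and-add coefficients of Equation (11) would tighten the error:

```python
def log2_fixed(x):
    """Approximate log2 of a positive integer x (Eqs. 7-10 sketch).

    k is the position of the leading 1 bit, giving the integer part of the
    logarithm; f = x / 2^k - 1 in [0, 1) is the normalized remainder, and
    log2(1 + f) is approximated by f itself (single-segment linear fit).
    """
    k = x.bit_length() - 1      # leading-one position
    f = x / (1 << k) - 1.0      # fractional part, 0 <= f < 1
    return k + f                # k + log2(1 + f) with log2(1 + f) ~= f

# Exact at powers of two; worst-case error about 0.086 in between.
assert log2_fixed(1024) == 10.0
assert abs(log2_fixed(48) - 5.585) < 0.1
```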

5) Inverse logarithmic conversion: the inverse logarithmic conversion turns the result computed in the logarithmic domain back into a fixed-point number. The principle is as follows:

Let x be a logarithmic value; x can be expressed as shown in Equation (13):

$$x = k + f, \quad 0 \le f < 1, \tag{13}$$

where k and f denote the integer and fractional parts of x, respectively. The inverse conversion of x can therefore be expressed as shown in Equation (14):

$$2^x = 2^k \cdot 2^f \tag{14}$$

In Equation (14), multiplying by 2^k is just a shift operation, while the nonlinear term 2^f must be approximated piecewise linearly, as shown in Equation (15):

$$2^f \cong a_i f + b_i \tag{15}$$

Here a_i and b_i are approximation coefficients defined within each approximation region i, 0 ≤ f < 1. Since the error of the inverse conversion spans essentially the whole region, f is divided into 8 approximation regions, and Equation (15) can be expressed as Equation (16):

$$a_i f + b_i \cong f + [\,inv_i^0(f) \gg p_i\,] + [\,inv_i^1(f) \gg q_i\,] + b_i \triangleq 2^f \tag{16}$$

SMT-PAAG consists of 16 processing elements (PEs) interconnected to form a 4 × 4 two-dimensional array; it also includes a front-end processor, four co-processors, two schedulers and two storage-management units. The overall structure of the system is shown in Figure 1[12,13].

Figure 1

The overall structure of the system

The connection between the coprocessor (ACCL) and the surrounding processing elements (PEs) is shown in Figure 1. Four external PEs are connected to the coprocessor, each of which sends an opcode, an operation-valid signal and operands a and b. A preprocessing module in the coprocessor sequentially arbitrates the operations of the 4 PEs and sends them to the pipeline for calculation[14].
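A minimal sketch of such a preprocessing stage (purely illustrative; the request format and issue policy here are our assumptions, not taken from the SMT-PAAG design): requests from the four PEs are filtered by their valid signals and issued to the pipeline in order.

```python
# Hypothetical model of the coprocessor's preprocessing module: each of the
# four PEs presents (opcode, valid, a, b); valid requests are issued to the
# pipeline one per cycle in PE order.
def issue_order(requests):
    """requests: list of 4 tuples (opcode, valid, a, b), indexed by PE number.
    Returns the PE indices whose operations enter the pipeline."""
    return [pe for pe, (op, valid, a, b) in enumerate(requests) if valid]

reqs = [("mul", 1, 2.0, 3.0), ("add", 0, 0.0, 0.0),
        ("div", 1, 8.0, 2.0), ("sqrt", 1, 9.0, 0.0)]
assert issue_order(reqs) == [0, 2, 3]   # PE1's request is not valid this cycle
```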

Framework of linear algebra core functions on multi-core processors

The linear algebraic model on a multicore processor is a time-varying iterative optimization method whose main purpose is to improve solution accuracy by imposing constraints on the inputs, outputs and performance metrics[15]. The multicore processor is shown in Figure 2. In practice, we would like to find an optimal solution that achieves optimal control. However, the desired goal may not be attainable, or may even contain errors or defects, which can lead to inefficient operation of the system.

Figure 2

Multicore processor

The multicore processor can operate on, control and analyze multiple discrete individual objects, and its multifunctional intelligent microcomputer enables automatic operation in different environments. It uses advanced algorithm-design methods to optimize processing. It is widely used in systems for complex data calculation and real-time monitoring thanks to the high accuracy and scalability of its multi-channel functions; at the same time, it can use multiple single-point discretization solvers to transform complex problems into several simple algebraic equations or vector matrices, which reduces complexity and computation while improving accuracy[16]. The internal structure of the multi-core processor is shown in Figure 3.

Figure 3

Internal structure of a multicore processor

Multicore hybrid network data transmission refers to the use of different communication protocols to exchange information between multiple nodes, and its specific data transmission structure is shown in Figure 4.

Figure 4

Data transfer structure

Linear algebra kernel function framework optimization on multi-core processors

There are four typical multi-core processors: Hydra processors, Piranha processors, IBM Power series multi-core processors and Cell processors.

Hydra processor: Hydra is a typical high-order inertial system composed of multiple processors. The processor has good fault tolerance, and it can be optimized to a greater extent when it performs special processing on specific objects. Its main function is to generate error signals during motion to improve dynamic performance; at the same time, it can provide information to help adjust the control strategy and optimize decision-making. The Hydra processor structure is shown in Figure 5.

Figure 5

Hydra processor structure

Piranha processor: Piranhaov et al. (1996) proposed a classical solver-based optimization method that replaces the traditional linear mapping calculus by using coefficients with nonlinear terms and a finite order as the input variable. The basic idea is that the function can be approximately fitted to zero under certain conditions. The algorithm can effectively solve the local-optimum problem; it also overcomes disadvantages such as the limit on the number of iterations and non-convergence, and it has strong global convergence. The Piranha processor structure is shown in Figure 6.

Figure 6

Piranha processor architecture

IBM Power series multicore processor: the IBM Power series multicore processor (ARM) is a nonlinear design language with virtualization. It is implemented in the form of graphics, functions, etc. on different interfaces to simulate the trajectories of things moving in the real world. The chip is capable of real-time monitoring and automatic tracking of objects, and it realizes human-computer interaction control and processing by connecting to a computer communication system; at the same time, it can complete various operations according to user instructions, and it can process and store different data according to users' actual requirements[17]. The packet-processing flow of the processor is shown in Figure 7.

Figure 7

Flow chart of packet processing for multi-core processors

Cell processor: Cell is a high-performance, highly integrated, high-speed and low-power product; its core function mainly refers to the different functional structures that can work properly within a certain frequency range. The Cell processor structure is shown in Figure 8.

Figure 8

Cell processor architecture

In optimizing the design of a multicore processor, the most important task is translating the optimal objective function into something that is easy to understand and test, which requires specific analysis of the different types of optimization objects. This places high demands on the multicore computing drive system and on the key technologies involved in various integrated development environments, making it very difficult to solve the above problems and achieve optimal performance. At the same time, there are interactions and constraints among the units, which creates further challenges in the design process. Researchers must therefore perform a specific analysis of each type of optimization object before determining the best solution.

Experiment and result analysis
System simulation

The simulation of the multi-core processor mainly refers to the use of the MATLAB development platform to complete modular modeling of the software system's functions, and to analyze algorithm performance at different levels through the optimization results, so as to provide theoretical guidance for practical applications.

In designing the program, versatility and scalability must be considered first: various user needs (such as system upgrades), data-structure types and characteristic requirements must be fully accounted for. Second, the principles of stability, reliability and ease of use must be met. Third, the developed software must have high fault tolerance, which improves both the performance index and the reliability of the multi-core processor. During program design, the system is first divided into several modules; each functional block is then partitioned and analyzed algorithmically; finally, a suitable solution is determined and debugging is completed according to the software test results.

Linear algebraic processing on multicore processors

The processing of linear algebraic operations on multicore processors refers to varying the parameters according to different models; it simplifies individual optimization problems when certain conditions are satisfied.

There are interconnections and interactions among the optimization variables, so when multiple factors are uncertain they need further analysis and calculation. If both factors can be incorporated into one objective function to design the optimal planning scheme, the linear algebra operation on the multi-core processor can be divided into the following steps: first, solve the polynomial for a single constraint; then, under certain conditions, convert all optimization problems into single-constraint problems; next, integrate the polynomial solution of the single constraint into a feasible optimization problem; finally, use the single constraint as the objective function for solving. The controllability of the multi-core processor facilitates handling multiple optimization objects (i.e., multiple types and different depths), simplifies the algorithm to a certain extent, and effectively improves computational efficiency, resulting in more accurate optimal solutions.

Experimental results and discussion

During the optimization design of the multicore processor, the software algorithm was improved with practical applications in mind. First, the program initialization was set to a three-iteration loop mode. Then, the relationship between system performance and stability was calculated for six different types of expansion functions and an 8-bit single-step operation strategy, and the optimal solution was derived by comparing each expansion function and the advantages and disadvantages of its corresponding algorithm. Finally, the optimization process of the multicore processor was designed under the premise that the model achieves high precision and fast response.

Based on a summary of the above experimental results, this paper proposes an optimization scheme for multicore computing. An orthogonal gradient function meeting the requirements of high speed, low power consumption, high efficiency and large storage capacity is established according to the single-step operation method, and the subgrid method is used to improve the conventional algorithm and achieve fast convergence. The original input space is transformed based on the genetic algebra model to obtain the optimal solution; compared with the conventional solution method, this yields better performance indicators: fewer single-step iterations and fewer parameters. The optimization procedure based on wavelet analysis is simpler, and the scheme can be used for multi-core computing and real-time control.

Conclusion

In the long history of modern science and technology, computer technology has become an indispensable part of the development of human society. With the continuous improvement of computer technology and manufacturing processes, intelligent machines are being researched in depth. The multi-core processor is a new type of high-speed precision computing system (FATS), and it is among the most advanced, efficient and stable control platforms of the past 20 years. It has the advantages of small size, high efficiency and multiple functions; thanks to its powerful nonlinear planning ability and good global optimization performance, it is widely used in the military and computer-network fields, where it plays an increasingly important role. As a complex system, a multicore processor's performance can be uneven or unpredictable, so an optimization design method based on a fitted polynomial algorithm is very important. In practical applications, the performance of a multicore processor, as a nonlinear process, is also affected by many factors. Therefore, in this paper we propose a simulation calculation method that combines an improved linear-programming solver and a first-order single-loop model.


References

[1] Klilou Abdessamad, Arsalane Assia. Parallel implementation of pulse compression method on a multi-core digital signal processor[J]. International Journal of Electrical and Computer Engineering (IJECE), 2020, 10(6):23–28. DOI: 10.11591/ijece.v10i6.pp6541-6548

[2] Mohammad Reza Heidari Iman, Pejman Yaghmaie. A software control flow checking technique in multi-core processors[J]. International Journal of Embedded Systems, 2020, 13(2):121–128. DOI: 10.1504/IJES.2020.108861

[3] Zenan Huo, Gang Mei, Giampaolo Casolla, Fabio Giampaolo. Designing an efficient parallel spectral clustering algorithm on multi-core processors in Julia[J]. Journal of Parallel and Distributed Computing, 2020, 138(C):42–51. DOI: 10.1016/j.jpdc.2020.01.003

[4] Tomasz Borejko, Krzysztof Marcinek, Krzysztof Siwiec, Paweł Narczyk, Adam Borkowski, Igor Butryn, Arkadiusz Łuczyk, Daniel Pietroń, Maciej Plasota, Szymon Reszewicz, Łukasz Wiechowski, Witold A. Pleskacz. NaviSoC: High-Accuracy Low-Power GNSS SoC with an Integrated Application Processor[J]. Sensors, 2020, 20(4):18–24. DOI: 10.3390/s20041069

[5] Takashi Nakada, Hiroyuki Yanagihashi, Kunimaro Imai, Hiroshi Ueki, Takashi Tsuchiya, Masanori Hayashikoshi, Hiroshi Nakamura. An Energy-Efficient Task Scheduling for Near Real-Time Systems on Heterogeneous Multicore Processors[J]. IEICE Transactions on Information and Systems, 2020, E103.D(2):47–54. DOI: 10.1587/transinf.2019EDP7101

[6] Vasileios Tenentes, Shidhartha Das, Daniele Rossi, Bashir M. Al Hashimi. Run-time Protection of Multi-core Processors from Power-Noise Denial-of-Service Attacks[J]. IEEE Transactions on Device and Materials Reliability, 2020, PP(99):270–298. DOI: 10.1109/TDMR.2020.2994272

[7] Empirical Analysis of Cache-Efficient In-place Matrix Transposition on Multicore Processors[J]. Recent Trends in Parallel Computing, 2019, 6(2):201–223.

[8] Dobosz Anna, Jastrzębski Piotr, Lecko Adam. On Certain Differential Subordination of Harmonic Mean Related to a Linear Function[J]. Symmetry, 2021, 13(6):20–24. DOI: 10.3390/sym13060966

[9] Afriliansyah T, Zulfahmi Z. Architecture Model Optimization of Cyclical Order Algorithm with Binary Sigmoid and Linear Function for Prediction[J]. Journal of Physics: Conference Series, 2021, 1899(1):31–42. DOI: 10.1088/1742-6596/1899/1/012088

[10] Muhammad Atif Sattar, Felix E Arcilla. Robust the implied volatility linear function for ad hoc Black Scholes approaches[J]. Journal of Stock & Forex Trading, 2021, 9(3):37–39.

[11] Wan Kai, Sun Hua, Ji Mingyue, Tuninetti Daniela, Caire Giuseppe. Cache-Aided General Linear Function Retrieval[J]. Entropy (Basel, Switzerland), 2020, 23(1):101–124. DOI: 10.3390/e23010025

[12] Voronenko Andrey A., Okuneva Anna S. Universal functions for linear functions depending on two variables[J]. Discrete Mathematics and Applications, 2020, 30(5):145–150. DOI: 10.1515/dma-2020-0032

[13] Voronenko A. A., Okuneva A. S. Universal Functions for Classes of Linear Functions of Three Variables[J]. Computational Mathematics and Modeling, 2020, 31(3):43–49. DOI: 10.1007/s10598-020-09501-y

[14] Dinh Cong Huong, Dao Thi Hai Yen. Interval observers for linear functions of states and unknown inputs of nonlinear fractional-order systems with time delays[J]. Computational and Applied Mathematics, 2020, 39(3):26–34. DOI: 10.1007/s40314-020-01190-y

[15] V. P. Zastavnyi. A Generalization of Schep's Theorem on the Positive Definiteness of a Piecewise Linear Function[J]. Mathematical Notes, 2020, 107(3):41–56. DOI: 10.1134/S0001434620050272

[16] Bonett Douglas G, Price Robert M. Interval estimation for linear functions of medians in within-subjects and mixed designs[J]. The British Journal of Mathematical and Statistical Psychology, 2020, 73(2):37–48. DOI: 10.1111/bmsp.12171

[17] Songjun Han, Fuqiang Tian. A review of the complementary principle of evaporation: from the original linear relationship to generalized nonlinear functions[J]. Hydrology and Earth System Sciences, 2020, 24(5):31–52. DOI: 10.5194/hess-24-2269-2020
