
Modeling of fractional differential equation in cloud computing image fusion algorithm

Published online: 15 Jul 2022
Volume & Issue: AHEAD OF PRINT
Pages: -
Received: 19 Feb 2022
Accepted: 23 Apr 2022
Introduction

Image processing is closely related to many disciplines, such as pattern recognition, artificial intelligence, expert systems, and computer graphics. It is the premise of image understanding and recognition, and the quality of the processing model and algorithm strongly affects subsequent image recognition and understanding [1]. At present, image processing mainly includes image denoising, magnification, restoration, compression, segmentation, and super-resolution reconstruction. An image to be processed often contains several different components, chiefly cartoon regions, edges, texture, and noise, and in the course of processing these components usually have to be distinguished and treated separately. For example, image denoising amounts to separating image information from noise; the key to image compression is distinguishing and preserving the important structures of the image (edges and textures); and super-resolution reconstruction must represent those important structures more clearly [2]. To distinguish and process the different components of an image well, they must first be described, that is, modeled mathematically in an appropriate way.

In the cloud computing environment, big data video images carry a great deal of information. There are usually multiple images of the same scene, each describing it incompletely, so the information from multiple video images must be fused to obtain a more accurate and reliable description of the scene. Rather than simply superimposing the video images, fusion forms a composite image that contains more information and reduces the uncertainty of the information [3]. Image fusion based on fractional partial differential equations (FPDEs) is a new interdisciplinary branch. Compared with image denoising based on integer-order partial differential equations, FPDE methods offer high computational efficiency, good accuracy, strong noise suppression, and strong retention of edge features, so research on numerical methods for fractional image denoising models has important application value [4]. Li H. et al. proposed an image denoising model based on a spatial fractional partial differential equation that overcomes both the staircase effect of traditional low-order integer-order models and the poor denoising performance of high-order integer-order models: a fractional derivative serves as the measure describing the image, and the Euler-Lagrange equation of the corresponding energy functional is then obtained from the theory and properties of the variational method [5]. Zhang C. et al. proposed a global variational image denoising method based on the exchange movement space, which fully exploits prior knowledge of the image and adaptively sets the regularization parameters according to the local features of the noisy image, but this method suffers from a complex structure, large memory usage, and long running time [6].

On this basis, this paper studies the modeling of fractional differential equations in a cloud computing image fusion algorithm. Experimental results show that the proposed algorithm achieves a good fusion effect.

Research Methods
Definition of fractional differential equation

For a function y = f(t) of the independent variable t, with the independent variable ranging over (a, b), the v-order fractional derivative of f(t) on this interval is written as

$$ {}_aD_b^v f(t) = D^v f(t) $$

where the order v of the calculus is fractional. When v > 0, v ∈ ℝ, it is called a fractional derivative; when v < 0, it is correspondingly called a fractional integral. Integer-order differentiation in the usual sense corresponds to the order v taking a positive natural number n ∈ ℕ⁺; v = −n represents integer-order integration in the usual sense. Thus, integer-order calculus can be regarded as a special case of fractional calculus, and fractional differential equations can be regarded as an extension of integer-order calculus.

The Grünwald-Letnikov definition of the fractional derivative (hereinafter the G-L definition) originates from the operation rules for classical integer-order derivatives of continuous functions, with the fractional order derived by extending the integer order. The G-L definition is very well suited to signal processing because it can be converted into a convolution operation, which makes it practical.

Let the function f(t) be continuously differentiable on the interval t ∈ [a, b] (a < b; a, b ∈ ℝ). The first derivative of f(t) is

$$ f'(t) = \lim_{h \to 0} \frac{f(t+h) - f(t)}{h} $$

where h is the step length of the variable t on [a, b] and its value remains unchanged. From the operation rules and theory of integer-order derivatives of continuous functions, the second derivative of the function is

$$ f''(t) = \lim_{h \to 0} \frac{f(t+2h) - 2f(t+h) + f(t)}{h^2} $$

By analogy, the n-order derivative of the function, that is, the n-order differential formula, is

$$ f^{(n)}(t) = \lim_{h \to 0} \frac{1}{h^n} \sum_{m=0}^{n} (-1)^m \binom{n}{m} f(t - mh) $$

The G-L definition of the fractional derivative extends the integer order n to a fractional order v > 0:

$$ {}_a^G D_t^v f(t) = \lim_{h \to 0} h^{-v} \sum_{m=0}^{\left[\frac{t-a}{h}\right]} (-1)^m \frac{\Gamma(v+1)}{m!\,\Gamma(v-m+1)} f(t - mh) $$

When the order is negative, the fractional integral of the function is obtained; the G-L fractional integral formula is

$$ {}_a^G D_t^{-v} f(t) = \lim_{h \to 0} h^{v} \sum_{m=0}^{\left[\frac{t-a}{h}\right]} \frac{\Gamma(v+m)}{m!\,\Gamma(v)} f(t - mh) $$

In summary, the G-L fractional integral can be expressed as

$$ {}_a^G J_t^v f(t) = \lim_{h \to 0} h^{v} \sum_{m=0}^{\left[\frac{t-a}{h}\right]} (-1)^m \binom{-v}{m} f(t - mh) $$
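Since the G-L sum is a truncated discrete convolution, it is straightforward to approximate numerically. Below is a minimal NumPy sketch (the function names and the test signal are illustrative, not from the paper): the coefficients $(-1)^m \binom{v}{m}$ are generated by the recurrence $w_0 = 1$, $w_m = w_{m-1}\left(1 - \frac{v+1}{m}\right)$, and the sum is evaluated at each sample.

```python
import numpy as np

def gl_coefficients(v: float, n: int) -> np.ndarray:
    """Grunwald-Letnikov coefficients w_m = (-1)^m * C(v, m),
    computed with the stable recurrence w_m = w_{m-1} * (1 - (v + 1) / m)."""
    w = np.empty(n)
    w[0] = 1.0
    for m in range(1, n):
        w[m] = w[m - 1] * (1.0 - (v + 1.0) / m)
    return w

def gl_derivative(f: np.ndarray, v: float, h: float = 1.0) -> np.ndarray:
    """Approximate the v-order G-L derivative of a uniformly sampled signal f.
    Each output sample is a truncated convolution with the G-L mask."""
    w = gl_coefficients(v, len(f))
    out = np.zeros(len(f))
    for k in range(len(f)):
        # D^v f(t_k) ~ h^{-v} * sum_{m=0}^{k} w_m * f(t_k - m h)
        out[k] = h ** (-v) * np.dot(w[: k + 1], f[k::-1])
    return out

# Example: the 0.5-order derivative of a ramp signal
signal = np.linspace(0.0, 1.0, 64)
d_half = gl_derivative(signal, v=0.5)
```

As a sanity check, v = 1 reproduces the ordinary backward difference quotient, and v = −1 a running rectangle-rule integral.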

Scale space theory

Scale space theory applies scale transformations to the original video image in the cloud computing environment to obtain a multi-scale description sequence of the video image, collects the main contours of this sequence in scale space, and treats the collected contours as feature vectors for extracting features at the edges of the video image and at various resolutions [7]. The scale space is a description of regions rather than of edges, and is defined as

$$ f_{out} = K f_{in} $$

In equation (8), f_in describes the input signal and f_out the signal obtained by convolving f_in with the transform kernel K. If the number of extrema in f_out does not exceed that in f_in, i.e., in the original video image, K is called a scale-space kernel and the corresponding convolution a scale transform. A scale space is generally obtained by smoothing and can be described by (x, σ), where x describes the position parameters. Filtering the same video image with smoothing parameters of different scales yields the scale space of the original video image with respect to the smoothing function [8].

In Gaussian scale space, features of the same kind and edges exhibit causality across scales: as the scale increases, no new feature points appear, while existing feature points may shift or disappear. The blurring caused by this causality cannot be avoided, only reduced. Because the Gaussian kernel is translation and rotation invariant, the first derivatives obtained from the Gaussian kernel turn the Harris corner operator into a scale-space description. The Gaussian kernel is

$$ G(x;\sigma) = \frac{1}{\sqrt{2\pi\sigma}} \exp\left(-\frac{x^2}{2\sigma}\right) $$
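As an illustration, a Gaussian scale space can be built by smoothing the image with kernels of increasing scale. The sketch below uses SciPy's gaussian_filter (note its sigma parameter is the standard deviation, whereas σ above plays the role of variance); the sigma ladder is an arbitrary choice for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(image: np.ndarray, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Convolve the image with Gaussian kernels of increasing scale.
    Coarser levels suppress fine texture and keep only the main contours."""
    image = image.astype(float)
    return [gaussian_filter(image, sigma=s) for s in sigmas]

# Example: four-level scale space of a random test image
levels = gaussian_scale_space(np.random.rand(128, 128))
```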

Big data video image scale-space fusion algorithm in the cloud computing environment

Big data video images in the cloud computing environment are fused in scale space through the lifting stationary wavelet transform, which is translation invariant. The algorithm not only has good local characteristics in the spatial and frequency domains but also avoids the image distortion caused by the traditional lifting wavelet transform. Because the scale space can effectively exploit the inter-layer correlation of wavelet coefficients to enhance image edges and reduce the influence of noise, this section performs big data video image fusion in the lifting stationary wavelet transform domain. The basic idea is as follows: first, each image to be fused is decomposed by the lifting stationary wavelet transform to obtain low-frequency and high-frequency subband coefficients in different scale spaces; then, fusion schemes for the low-frequency and high-frequency subband coefficients are applied to obtain the lifting stationary wavelet coefficients of the fused image; finally, the fused image is obtained by the inverse lifting stationary wavelet transform [9].
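The three-step procedure can be sketched as follows. PyWavelets' stationary (undecimated) wavelet transform pywt.swt2 is used here as a stand-in for the lifting implementation, and the subband rules are simplified placeholders for the visibility- and contrast-based rules developed in the next subsections.

```python
import numpy as np
import pywt

def fuse_images(img_a: np.ndarray, img_b: np.ndarray,
                wavelet: str = "haar", level: int = 2) -> np.ndarray:
    """Decompose both images, merge subbands, invert the transform.
    Image sides must be divisible by 2**level for swt2."""
    coeffs_a = pywt.swt2(img_a.astype(float), wavelet, level=level)
    coeffs_b = pywt.swt2(img_b.astype(float), wavelet, level=level)
    fused = []
    for (ca, (ha, va, da)), (cb, (hb, vb, db)) in zip(coeffs_a, coeffs_b):
        low = np.where(ca >= cb, ca, cb)  # placeholder low-frequency rule
        high = tuple(np.where(np.abs(x) >= np.abs(y), x, y)  # max-abs rule
                     for x, y in ((ha, hb), (va, vb), (da, db)))
        fused.append((low, high))
    return pywt.iswt2(fused, wavelet)
```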

Lifting stationary wavelet transform decomposition

The decomposition and reconstruction of the lifting stationary wavelet transform are illustrated in Figure 1.

Figure 1

Schematic diagram of lifting stationary wavelet decomposition and reconstruction

In Figure 1, $P^{l+1}$ and $U^{l+1}$ denote, in turn, the prediction and update operators of the layer l + 1 transform; $a^l$ denotes the input signal, that is, the big data video image to be fused in the cloud computing environment; $d^{l+1}$ and $a^{l+1}$ denote the high-frequency and low-frequency signals after lifting stationary wavelet decomposition; $p_i^{l+1}$ and $u_j^{l+1}$ denote the prediction and update coefficients of the lifting stationary wavelet transform.

Low-frequency subband coefficient fusion

In the cloud computing environment, the low-frequency part obtained after lifting stationary wavelet decomposition embodies the main energy of the big data video image and directly determines the contour of the video image. Selecting reasonable low-frequency subband coefficients is therefore key to the visual quality of the fused image. Many studies show that human vision is very sensitive to changes in local contrast, so the concept of local visual visibility, which reflects the contrast of the video image, is introduced:

$$ RL_L(x,y) = \frac{S_{ML}(x,y)}{\bar{I}_L(x,y)^{\alpha+1}} $$

In equation (10), $S_{ML}(x,y)$ denotes the modified Laplacian energy sum at position (x, y) of the layer-L low-frequency subband, and $\bar{I}_L(x,y)$ denotes the local mean:

$$ \bar{I}_L(x,y) = \frac{1}{(2M+1)(2N+1)} \sum_{m=-M}^{M} \sum_{n=-N}^{N} I_L(x+m,\,y+n) $$

where (2M + 1) × (2N + 1) describes the size of the local region, taken here as 3 × 3; $I_L(x,y)$ denotes the low-frequency subband coefficient at position (x, y) of scale L; and α is a visual constant, generally taken in the range 0.6-0.7, adjusted according to the overall brightness so as to avoid nonlinear interference between the sensitivity threshold and the background brightness.
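A sketch of the local visibility computation, assuming a grayscale float subband; the SML is approximated here as a windowed sum of the modified Laplacian, and the epsilon guard is an added safeguard not in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_visibility(low: np.ndarray, alpha: float = 0.65,
                     win: int = 3) -> np.ndarray:
    """RL_L(x, y) = SML(x, y) / Ibar_L(x, y)^(alpha + 1), equation (10)."""
    # Modified Laplacian: absolute second differences along x and y
    ml = (np.abs(2 * low - np.roll(low, 1, axis=1) - np.roll(low, -1, axis=1))
          + np.abs(2 * low - np.roll(low, 1, axis=0) - np.roll(low, -1, axis=0)))
    sml = uniform_filter(ml, size=win) * win * win   # windowed sum of ML
    mean = uniform_filter(low, size=win)             # local mean, equation (11)
    return sml / (np.abs(mean) ** (alpha + 1) + 1e-12)
```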

Let A and B denote two source images among the big data video images in the cloud computing environment. The low-frequency subband coefficients are fused as follows:

$$ I_L^F(x,y) = \begin{cases} I_L^A(x,y), & I_L^A(x,y) \ge I_L^B(x,y) \\ I_L^B(x,y), & I_L^A(x,y) < I_L^B(x,y) \end{cases} $$

where $I_L^F(x,y)$, $I_L^A(x,y)$, and $I_L^B(x,y)$ denote, in turn, the low-frequency subband coefficients at position (x, y) of layer L of the fused image F and of the source images A and B.
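Read literally, equation (12) keeps the larger coefficient at each position; given the visibility measure just introduced, a natural variant compares the RL values of equation (10) instead. A minimal sketch of the literal rule:

```python
import numpy as np

def fuse_low(low_a: np.ndarray, low_b: np.ndarray) -> np.ndarray:
    """Equation (12): pointwise selection of the larger low-frequency
    coefficient from the two source images."""
    return np.where(low_a >= low_b, low_a, low_b)
```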

High-frequency subband coefficient fusion

The high-frequency subbands of the video image obtained after the lifting stationary wavelet transform embody the edge information of the image. Because the Laplacian operator reflects image gradient information, it is introduced on top of the lifting stationary wavelet transform:

$$ PML_l^k(x,y) = \left| 2F_l^k I(x,y) - F_l^k I(x-1,y) - F_l^k I(x+1,y) \right| + \left| 2F_l^k I(x,y) - F_l^k I(x,y-1) - F_l^k I(x,y+1) \right| $$

In equation (13), $F_l^k I(x,y)$ denotes the multi-scale product of the lifting stationary wavelet transform at position (x, y) in the k-th direction of layer l; k indexes the horizontal, vertical, and diagonal directions obtained after the image is decomposed by the lifting stationary wavelet transform; $PML_l^k(x,y)$ denotes the corresponding modified Laplacian of the lifting stationary wavelet coefficients.

Because the visual system is relatively sensitive to changes in the local contrast of a video image, the concept of visual feature contrast is given:

$$ HC_{l,k}(x,y) = \omega\left[\bar{I}_l(x,y)\right] \frac{PML_l^k(x,y)}{\bar{I}_l(x,y)} $$

In equation (14), $\bar{I}_l(x,y)$ denotes the local mean of the layer-l low-frequency subband coefficients:

$$ \bar{I}_l(x,y) = \frac{1}{(2M+1)(2N+1)} \sum_{m=-M}^{M} \sum_{n=-N}^{N} I_l(x+m,\,y+n) $$

$\omega[\bar{I}_l(x,y)]$ denotes the weight coefficient. Because the relationship between the contrast sensitivity threshold and the background brightness is nonlinear, the weight is corrected by the average brightness of the local region:

$$ \omega\left[\bar{I}_l(x,y)\right] = \left[\frac{1}{\bar{I}_l(x,y)}\right]^{\alpha} $$
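A sketch of the high-frequency side under the same assumptions as before (grayscale float arrays; the winner-take-all selection by HC is an assumed rule, since the paper does not spell out the selection step here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def modified_laplacian(band: np.ndarray) -> np.ndarray:
    """PML of equation (13): absolute second differences along x and y."""
    return (np.abs(2 * band - np.roll(band, 1, axis=1) - np.roll(band, -1, axis=1))
            + np.abs(2 * band - np.roll(band, 1, axis=0) - np.roll(band, -1, axis=0)))

def feature_contrast(band: np.ndarray, low: np.ndarray,
                     alpha: float = 0.65, win: int = 3) -> np.ndarray:
    """HC of equation (14): PML scaled by the brightness-corrected weight
    of equation (16) and normalised by the local mean of equation (15)."""
    # Magnitude of the local mean, guarded against division by zero
    mean = np.abs(uniform_filter(low, size=win)) + 1e-12
    weight = (1.0 / mean) ** alpha                # weight, equation (16)
    return weight * modified_laplacian(band) / mean

def fuse_high(band_a, band_b, low_a, low_b):
    """Keep the coefficient with the larger visual feature contrast."""
    hc_a = feature_contrast(band_a, low_a)
    hc_b = feature_contrast(band_b, low_b)
    return np.where(hc_a >= hc_b, band_a, band_b)
```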

Results analysis and discussion
Fused image clarity test results

To further verify the effectiveness of the proposed algorithm, the fusion results must be evaluated with objective indexes. Image clarity is a key index for evaluating a fusion algorithm. For an image of size (M, N) whose gray level at position (i, j) is f(i, j), the image clarity can be written as

$$ I_D = \frac{\sum_{i=1}^{M-1} \sum_{j=1}^{N-1} \sqrt{\left[\frac{\partial f(i,j)}{\partial x}\right]^2 + \left[\frac{\partial f(i,j)}{\partial y}\right]^2}}{(M-1)(N-1)} $$

Let x describe the video image pixels and f(x) the clarity evaluation index of the corresponding fused image. To reflect the fusion effect more intuitively, the function g(x) = f(x) − m is constructed, where m is a constant, m = 8.1; g(x) makes it easier to analyze how the clarity of the fused video image changes as x gradually increases. The results are shown in Figure 2, and the corresponding amount of computation in Figure 3.
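With forward differences for the partial derivatives, the clarity index can be computed as in this sketch (assuming a grayscale float image; the function name is illustrative):

```python
import numpy as np

def clarity(img: np.ndarray) -> float:
    """I_D of equation (17): average gradient magnitude over the image."""
    img = img.astype(float)
    gx = np.diff(img, axis=1)[:-1, :]   # forward differences, cropped to a
    gy = np.diff(img, axis=0)[:, :-1]   # common (M-1) x (N-1) grid
    return float(np.sqrt(gx ** 2 + gy ** 2).sum()
                 / (gx.shape[0] * gx.shape[1]))
```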

Figure 2

Clarity evaluation results of this method

Figure 3

Calculation quantity test results

Analysis of Figure 2 shows that as x gradually increases, image clarity rises, indicating that the fusion effect of the proposed algorithm is very good. Once x grows beyond a certain point, g(x) levels off, showing that larger x is not always better. Analysis of Figure 3 shows that the amount of computation also grows gradually with x but remains small overall, so the proposed algorithm not only ensures the clarity of the fused image but also keeps the computational cost low.

Comprehensive evaluation results

To comprehensively evaluate the fusion effect of the proposed algorithm, the entropy, standard mean square deviation, signal-to-noise ratio, and quality measurement index are tested as evaluation indexes.

Entropy is calculated as

$$ H = -\sum_{i=0}^{L} p(i) \log_2 p(i) $$

In equation (19), p(i) describes the normalized histogram of the image and L the maximum gray value of the image pixels. Entropy embodies the information contained in the image: as entropy increases, the image carries richer information. The standard mean square deviation is calculated as

$$ MSE = \sum_{k=1}^{K} \sum_{j=1}^{J} \left( \tilde{P}_{x_k, y_j} - P_{x_k, y_j} \right)^2 $$

The signal-to-noise ratio is calculated as

$$ S_{NR} = 10 \lg \frac{\sum_{k=1}^{K} \sum_{j=1}^{J} P_{x_k, y_j}^2}{\sum_{k=1}^{K} \sum_{j=1}^{J} \left( \tilde{P}_{x_k, y_j} - P_{x_k, y_j} \right)^2} $$

where $P_{x_k, y_j}$ denotes the gray value of the source image at pixel $(x_k, y_j)$ and $\tilde{P}_{x_k, y_j}$ the gray value of the fused image at the same pixel.
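These three indexes can be sketched as follows for 8-bit grayscale images (function names are illustrative; the entropy uses the standard base-2 logarithm):

```python
import numpy as np

def entropy(img: np.ndarray, levels: int = 256) -> float:
    """Shannon entropy of the normalized gray-level histogram, eq. (19)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins: 0 * log(0) -> 0
    return float(-(p * np.log2(p)).sum())

def mse(src: np.ndarray, fused: np.ndarray) -> float:
    """Sum of squared gray-value differences between source and fused image."""
    return float(((fused.astype(float) - src.astype(float)) ** 2).sum())

def snr_db(src: np.ndarray, fused: np.ndarray) -> float:
    """Signal-to-noise ratio in decibels."""
    return float(10.0 * np.log10((src.astype(float) ** 2).sum()
                                 / mse(src, fused)))
```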

The standard mean square deviation embodies how much of the source image information the fused image retains: the larger the standard mean square deviation, the less source information is retained. The signal-to-noise ratio behaves in the opposite way: the higher its value, the better the fusion effect of the algorithm [10].

The quality measurement index Q(A, B, F) measures the quality of the fused image through local quality, and is written as

$$ Q(A,B,F) = \frac{\sum_{i=0}^{L} p(i)}{\sum_{k=1}^{K} \sum_{j=1}^{J} \left( \tilde{P}_{x_k, y_j} - P_{x_k, y_j} \right)^2} $$

Q(A, B, F) lies in the range [−1, +1]; the closer its value is to 1, the better the quality of the fused image is considered to be.

For comparison, the sparse coding algorithm and the random walk algorithm are also tested. The three algorithms are used to fuse big data video images in the cloud computing environment, and the entropy, standard mean square deviation, signal-to-noise ratio, and quality measurement index of the fused images are compared. The results are given in Table 1.

Table 1. Objective evaluation of image fusion results of three algorithms

Algorithm                  Entropy   Standard mean square deviation   Signal-to-noise ratio   Quality measurement index
Proposed algorithm         7.1450    345.47                           22.278                  0.802
Sparse coding algorithm    7.0021    505.81                           12.058                  0.711
Random walk algorithm      7.0882    762.23                           11.261                  0.676
Conclusion

In this paper, the modeling of fractional differential equations in a cloud computing image fusion algorithm is studied in depth, and a new scale-space fusion algorithm for big data video images in the cloud computing environment is proposed. Scale space theory is introduced, and the big data video images are fused in scale space through the translation-invariant lifting stationary wavelet transform. Experimental results show that the proposed algorithm achieves a good fusion effect.


References

[1] Sulaiman T A, Bulut H, Atas S S. Optical solitons to the fractional Schrödinger-Hirota equation. Applied Mathematics and Nonlinear Sciences, 2019, 4(2): 535-542. doi:10.2478/AMNS.2019.2.00050

[2] Sulaiman T A, Bulut H. The new extended rational SGEEM for construction of optical solitons to the (2+1)-dimensional Kundu-Mukherjee-Naskar model. Applied Mathematics and Nonlinear Sciences, 2019, 4(2): 513-522. doi:10.2478/AMNS.2019.2.00048

[3] Dian R, Li S, Sun B, et al. Recent advances and new guidelines on hyperspectral and multispectral image fusion. Information Fusion, 2021, 69(2): 40-51. doi:10.1016/j.inffus.2020.11.001

[4] Qu Z, Huang X, Liu L. An improved algorithm of multi-exposure image fusion by detail enhancement. Multimedia Systems, 2021, 27(1): 33-44. doi:10.1007/s00530-020-00691-4

[5] Li H, Yang M, Yu Z. Joint image fusion and super-resolution for enhanced visualization via semi-coupled discriminative dictionary learning and advantage embedding. Neurocomputing, 2021, 422(4): 62-84. doi:10.1016/j.neucom.2020.09.024

[6] Zhang C. Convolution analysis operator for multimodal image fusion. Procedia Computer Science, 2021, 183(5): 603-608. doi:10.1016/j.procs.2021.02.103

[7] Bhat S, Koundal D. Multi-focus image fusion techniques: a survey. Artificial Intelligence Review, 2021(6): 1-53. doi:10.1007/s10462-021-09961-7

[8] Yang R, Du B, Duan P, et al. Electromagnetic Induction Heating and Image Fusion of Silicon Photovoltaic Cell Electrothermography and Electroluminescence. IEEE Transactions on Industrial Informatics, 2020, 16(7): 4413-4422. doi:10.1109/TII.2019.2922680

[9] Zhang S, Liu F. Infrared and visible image fusion based on non-subsampled shearlet transform, regional energy, and co-occurrence filtering. Electronics Letters, 2020, 56(15): 761-764. doi:10.1049/el.2020.0557

[10] Sharma A M, Dogra A, Goyal B, et al. From pyramids to state-of-the-art: a study and comprehensive comparison of visible-infrared image fusion techniques. IET Image Processing, 2020, 14(9): 1671-1689. doi:10.1049/iet-ipr.2019.0322

[11] Hemanth D J, Rajinikanth V, Rao V S, et al. Image fusion practice to improve the ischemic-stroke-lesion detection for efficient clinical decision making. Evolutionary Intelligence, 2021(2): 1-11. doi:10.1007/s12065-020-00551-0
