
A mathematical model of PCNN for image fusion with non-sampled contourlet transform

Published: 20 May 2022
Volume & Issue: AHEAD OF PRINT
Page range: -
Received: 19 Mar 2022
Accepted: 14 Apr 2022
Abstract

Non-sampled contourlet transform (NSCT) is a non-downsampling image transform: it is free of spectrum aliasing, and its high-frequency components can be further decomposed into multiple directional subbands, giving a good representation of image details. This paper therefore proposes an image fusion technique based on NSCT. Considering the different characteristics of the low-frequency and high-frequency components, a double fusion rule is used: a parallel PCNN method merges the high-frequency components, while the 'golden section method' merges the low-frequency component. Experiments show that this algorithm has clear advantages in preserving image contour, texture and edge details.

Keywords

MSC 2010

Introduction

In essence, image fusion can be understood as data fusion. Images of the same scene captured by a camera may have different focus positions, so different parts of each image are sharp; image merging technology can synthesise these images so that more information is obtained from a single result. Multi-focus image fusion synthesises two or more images taken with different focal lengths, filtering out the blurred parts of each image and retaining the clear parts, to allow better observation and analysis.

Image fusion technology is playing an increasingly important role in modern aerospace, automatic control, remote sensing and telemetry, and medicine, especially in military command [2,3,4,5].

There are many kinds of image merging technologies, which can be realised in the spatial domain or the frequency domain. In the spatial domain, the comparison of pixel values is the core idea of merging; in the frequency domain, the choice of transform plays the decisive role. Research shows that merging in the frequency domain can fuse more information and gives good results. Transform-domain methods use a multiscale transformation as a tool to extract salient features of the images. Typical examples are the wavelet-based methods [6,7,8,9]; Reference [8], for instance, uses the discrete wavelet transform to map images from the spatial domain to the frequency domain, where merging rules can better locate the detailed information of the images and achieve ideal results.

The two-dimensional discrete wavelet transform is formed from tensor products of one-dimensional wavelets and has three directions: horizontal, vertical and diagonal. It is optimal for representing point singularities, but not for two-dimensional features such as edges, contours and curves, which leads to a large number of ineffective decompositions. In recent years, new multi-scale geometric analysis tools have been proposed to handle two-dimensional and higher anisotropy. New multi-scale transforms such as the Ridgelet [10], Curvelet [11] and Contourlet [12] have been applied to many data processing fields, such as compression and image merging.

The contourlet transform (CT) is a data matrix analysis method based on geometric features. It decomposes the image at multiple scales and in multiple directions, and is suited to the analysis of line singularities. However, because it contains a downsampling step, it is not translation invariant, which causes a pseudo-Gibbs effect and image distortion. To address this problem, a non-sampled contourlet transform (NSCT) with translation invariance was proposed in reference [13]. NSCT retains the excellent characteristics of CT: it satisfies the anisotropic scale relationship and has good directivity.

NSCT easily captures edges, texture and other details in the image, so it is well suited to natural images rich in detail and directional information. After NSCT decomposition, the image is split into one low-frequency band and several high-frequency bands. The existing literature contains many discussions of the importance of the high- and low-frequency components (coefficients) obtained after decomposition, and fusion rules based on regional energy are often adopted. However, the imaging mechanisms of different sensors lead to great differences between the acquired images, and the different features of the image information cannot be represented by the regional energy method alone.

The pulse-coupled neural network (PCNN) was put forward in references [14] and [15]. PCNN is a new kind of artificial neural network, different from existing networks, with the characteristics of synchronous excitation and a variable threshold. Using PCNN for image fusion can be regarded as a global fusion method: it retains more detailed information, such as image contours and edges, and obtains a good fusion effect. In previous studies, reference [16] used the Laplace energy of image blocks as the input stimulus of the PCNN neurons; reference [17] selected the input stimulus of the neurons from contrast-pyramid decomposition coefficients, which performed well in fusion; for pixel-level fusion, reference [18] defined a pixel-by-pixel definition (sharpness) of the image, which PCNN used as the connection strength of the corresponding neurons, and experiments showed that pixel-level PCNN fusion effectively captures image contours and other details with an ideal fusion effect; reference [19] combined the advantages of the wavelet transform and PCNN in a joint fusion method, with the link strength of all neurons set to the same empirical value, and the proposed algorithm improved on the existing fusion algorithms based on image-block energy comparison in both subjective vision and objective evaluation indexes.

Since NSCT has translation invariance, the low-frequency band is the smooth region of the image, while the high-frequency bands carry the contours, texture and other details of the original image matrix. Accordingly, after NSCT decomposes the image, a double fusion strategy is needed so that the smooth regions and the detail regions are merged effectively at the same time. For the low-frequency information, the golden section method of reference [20] is adopted to search adaptively for the optimal fusion weight; for the high-frequency components, parallel PCNN technology effectively fuses the details and keeps them intact. The algorithm thus fuses the low-frequency subband coefficients adaptively. The fusion algorithm was applied to natural images and medical images; compared with other traditional image fusion algorithms, it achieved good fusion results.

Mathematical model
Mathematical model of NSCT

NSCT is mainly composed of two parts: the non-downsampling pyramid decomposition (NSP) and the non-downsampling directional filter bank (NSDFB). The structure of both is shown in Figure 1(a). NSP completes the scale decomposition of the image, dividing it into multiple frequency levels; the number of scale decompositions is usually set between 2 and 4. Because this decomposition involves no downsampling, the high- and low-frequency components obtained are the same size as the source image. Since the high-frequency subband information lacks a directional description, NSDFB is applied to the high-frequency components to divide each of them further into directional components, the number of which is usually an integer power of 2 (Figure 1b).

Fig. 1

(a). Decomposition of NSCT two-layer results. (b) NSCT high-frequency subband diagram. NSCT, non-sampled contourlet transform

The structure of NSP consists of two filter banks with non-sampling characteristics; since there is no sampling process, it is translation invariant. After NSP decomposes the image into its scales, NSDFB can divide the image at each scale into any power-of-2 number of directions while preserving translation invariance. NSCT is redundant: its redundancy is $W = \sum_{i=0}^{I-1} 2^{d_i}$, where $2^{d_i}$ is the number of directional subbands at the $i$-th high-frequency scale.
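As a concrete check of this formula, the sketch below computes the redundancy for the decomposition used later in the experiments, where the numbers of directional subbands from fine to coarse scale are 16, 8, 4 and 4 (i.e. d_i = 4, 3, 2, 2); the code is only an illustration of the formula, not part of the algorithm.

```python
# Redundancy of an NSCT decomposition: W = sum_{i=0}^{I-1} 2^{d_i},
# where 2^{d_i} is the number of directional subbands at scale i.
# Direction counts taken from the experimental setup (16, 8, 4, 4 subbands).
d = [4, 3, 2, 2]
W = sum(2 ** di for di in d)
print(W)  # 16 + 8 + 4 + 4 = 32
```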

Mathematical model of PCNN

PCNN breaks through the limitation of existing artificial neural networks, which use only a limited set of biological-neuron attributes for model construction. It is a feedback network formed by the interconnection of multiple neurons, based on Eckhorn's model of neuronal activity in the biological visual cortex. Structurally, each neuron can be thought of as having three parts: a receiving part, a modulation part and a pulse-generation part (as shown in Figure 2).

Fig. 2

Single neural cell model

When processing an M×N image, PCNN operates on the pixel values in the spatial domain and takes them as the inputs of the neurons; the size of the network formed by the PCNN neurons therefore matches the image size. The activity of a single neuron N_ij is given by:

F_{ij}[n] = e^{-\alpha_F} F_{ij}[n-1] + S_{ij} + V_F \sum_{kl} M_{ijkl} Y_{kl}[n-1]   (1)

L_{ij}[n] = e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{kl} M_{ijkl} Y_{kl}[n-1]   (2)

L_{ij}[n] = e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{kl} M_{ijkl} Y^1_{kl}[n-1] + V_N \sum_{kl} M_{ijkl} Y^2_{kl}[n-1]   (3)

U_{ij}[n] = F_{ij}[n] \left( 1 + \beta L_{ij}[n] \right)   (4)

T_{ij}[n] = e^{-\alpha_T} T_{ij}[n-1] + V_T Y_{ij}[n]   (5)

Y_{ij}[n] = \begin{cases} 1 & U_{ij}[n] \ge T_{ij}[n] \\ 0 & \text{otherwise} \end{cases}   (6)

where S_ij, U_ij and Y_ij are, respectively, the external input (also called the external stimulation), the internal activity and the external output of the neuron; L_ij and F_ij are, respectively, the link-domain and feedback-domain input channels of the neuron; the connection weight coefficients between neurons are described by matrices, denoted M and W, respectively; V_F and V_L (V_N) are, respectively, the feedback-domain and link-domain amplification coefficients; T_ij is the output of the threshold function and V_T is the threshold amplification coefficient; α_L, α_F and α_T are time constants, α_L corresponding to the link domain, α_F to the feedback domain and α_T to the variable threshold function.
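To make equations (1)-(6) concrete, here is a minimal NumPy sketch of one iteration of a single PCNN layer (the slave-type update using Eqs. (1), (2), (4)-(6)). All parameter values and the 3×3 link kernel are illustrative assumptions rather than the paper's settings, and the firing decision compares against the previous threshold, a common implementation convention.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_step(S, F, L, T, Y, M,
              aF=0.1, aL=1.0, aT=0.5,
              VF=0.5, VL=0.2, VT=20.0, beta=0.1):
    """One PCNN iteration per Eqs. (1), (2), (4)-(6); parameters are assumed."""
    link = convolve(Y, M, mode='constant')       # sum_kl M_ijkl * Y_kl[n-1]
    F = np.exp(-aF) * F + S + VF * link          # feedback channel, Eq. (1)
    L = np.exp(-aL) * L + VL * link              # link channel, Eq. (2)
    U = F * (1.0 + beta * L)                     # internal activity, Eq. (4)
    Y = (U >= T).astype(float)                   # firing output, Eq. (6)
    T = np.exp(-aT) * T + VT * Y                 # threshold update, Eq. (5)
    return F, L, T, Y

# Illustrative use on a stand-in subband:
S = np.random.rand(64, 64)                       # external stimulus S_ij
M = np.array([[0.5, 1.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 1.0, 0.5]])                  # assumed 3x3 link kernel
F = np.zeros_like(S); L = np.zeros_like(S)
T = np.ones_like(S);  Y = np.zeros_like(S)
for n in range(10):                              # a few iterations
    F, L, T, Y = pcnn_step(S, F, L, T, Y, M)
```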

A unique property of the PCNN model is neuron capture: the firing of one neuron causes the capture and firing of adjacent neurons with similar brightness, which automatically realises information transmission and information coupling. This property is what makes PCNN suitable for image fusion. This paper adopts a bilateral PCNN model with a parallel structure: one of the two images to be fused drives the master PCNN and the other drives the slave PCNN. The activity of a master neuron is computed with equations (1) and (3)-(6), and that of a slave neuron with equations (1), (2) and (4)-(6). Using the capture property, the double-layer parallel network feeds the firing information of each slave-PCNN neuron into the link domain of the corresponding master-PCNN neuron, so that the information in the slave PCNN is fused with the corresponding information in the master PCNN. That is, when a neuron in the slave PCNN fires, its firing information is transmitted simultaneously to its connected neurons and to the corresponding neuron of the master PCNN. Although this method adds some system complexity, it makes the image fusion effect more ideal.

Implementation of the fusion algorithm

Choosing reasonable fusion rules enriches the information carried by the fused image. The high- and low-frequency components of the image correspond to different image features, so in practice selecting different fusion rules for each is beneficial to the overall fusion. Since the NSCT low-frequency coefficients are large and store most of the information about the image, equation (7) is used to compute the corresponding fusion result, with the fusion weight denoted by ω:

S_L^F(n,m) = \omega \, S_L^A(n,m) + (1 - \omega) \, S_L^B(n,m)   (7)

In the low-frequency region, a proper fusion weight improves the result of consolidation. The basic idea of the 'golden section method' is to estimate the position of the optimal weight by optimal search: 'bad points' are eliminated, the interval around the 'good points' is retained, and the search interval is narrowed step by step. After repeated comparison of the function values at the trial positions, the position of the optimal weight is obtained. During optimisation, the objective function is the edge fusion quality index Q_E(ω) [19], which reflects well the retention of detailed information such as image contours.

After image registration, image fusion can be carried out. Taking the fusion of two images as an example, the algorithm constructed in this paper searches exhaustively for the best low-frequency coefficient weight in the NSCT domain and processes the high-frequency components with the parallel PCNN. The overall procedure is shown in Figure 3.

Fig. 3

Implementation of image fusion algorithm by NSCT. NSCT, non-sampled contourlet transform

First, the source images A and B are each decomposed by an L-layer NSCT. NSP decomposes each image into low-frequency components at the various scales, $\{S_l^A(n,m), 0 \le l \le L-1\}$ and $\{S_l^B(n,m), 0 \le l \le L-1\}$. NSDFB then decomposes the high-frequency bands into directional subbands $\{D_{l,i}^A(n,m), 0 \le l \le L-1, 1 \le i \le k_l\}$ and $\{D_{l,i}^B(n,m), 0 \le l \le L-1, 1 \le i \le k_l\}$, where $k_l$ is the number of directional subbands at decomposition layer $l$ and $D_{l,i}^A(n,m)$ is the $i$-th directional subband of image A. The specific fusion steps are as follows:

Step 1: For $S_l^A(n,m)$ and $S_l^B(n,m)$, the golden section method is used to search adaptively for the optimal low-frequency fusion weight ω*;

Step 2: Using the ω* found by the Step 1 search, the low-frequency coefficients of the two images to be fused are selected and merged by formula (7). The objective function is the edge fusion quality index Q_E, denoted by T. The initial search interval [a, b] is set to [0, 1] and ɛ = 0.01 is the stopping tolerance. The adaptive search for the optimal fusion weight ω* proceeds as follows (a sketch in code is given after these steps):

1) Compute the trial points ω1 and ω2 of the low-frequency fusion weight in the initial search interval [a, b]: ω1 = a + 0.382 · (b − a), ω2 = a + b − ω1, with ω1, ω2 ∈ [a, b];

2) Calculate the value T(ω1);

3) Calculate the value T(ω2);

4) If T(ω1) > T(ω2), update a = ω1 and go to step 5); otherwise go to step 6);

5) If |b − a| < ɛ, set ω* = (a + b)/2 and return; otherwise update ω1 and ω2: ω1 = ω2, ω2 = a + 0.618 · (b − a), and go to step 2);

6) Update b = ω2;

7) If |b − a| < ɛ, set ω* = (a + b)/2 and return; otherwise update ω1 and ω2: ω2 = ω1, ω1 = a + 0.382 · (b − a), and go to step 2);
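The sketch below implements steps 1)-7) as a standard golden-section search. It is written as a maximiser, on the assumption that a larger Q_E indicates better fusion; the objective passed in stands for T(ω) = Q_E of the weighted low-band combination, and the gradient-based placeholder objective is only for illustration, since Q_E itself is defined in [19].

```python
import numpy as np

def golden_section_weight(T, a=0.0, b=1.0, eps=0.01):
    """Golden-section search for the weight w* maximising T(w) on [a, b]."""
    w1 = a + 0.382 * (b - a)
    w2 = a + b - w1                      # = a + 0.618 * (b - a)
    t1, t2 = T(w1), T(w2)
    while abs(b - a) >= eps:
        if t1 < t2:                      # maximum lies in [w1, b]
            a, w1, t1 = w1, w2, t2
            w2 = a + 0.618 * (b - a)
            t2 = T(w2)
        else:                            # maximum lies in [a, w2]
            b, w2, t2 = w2, w1, t1
            w1 = a + 0.382 * (b - a)
            t1 = T(w1)
    return (a + b) / 2.0

# Illustrative use: SA, SB stand in for low-frequency subbands, and the
# gradient-magnitude mean is only a placeholder for the Q_E index of [19].
SA, SB = np.random.rand(64, 64), np.random.rand(64, 64)
QE = lambda img: float(np.abs(np.gradient(img)[0]).mean())
w_star = golden_section_weight(lambda w: QE(w * SA + (1 - w) * SB))
```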

Step 3: For the high-frequency directional subbands at each scale of the multi-sensor source images, the input to the master PCNN is $D_{l,i}^A(n,m)$ and the input to the slave PCNN is $D_{l,i}^B(n,m)$. Then the PCNN parameters are initialised;

Step 4: For each iteration of the master-slave PCNN network:

Compute each neuron of the slave PCNN model according to formulas (1), (2) and (4)-(6), feed the output of each slave-PCNN neuron back to the link domain of the corresponding master-PCNN neuron, and compute the network output of the master PCNN according to formulas (1) and (3)-(6);

The series of multi-scale fused subbands output by the PCNN network is reconstructed, and the reconstructed image is the fusion result of this iteration;

According to the following formula, calculate the information content of the fused image:

H(x) = -\sum_i p(x)_i \log p(x)_i

Here, H(x) is the average amount of information in the fused image, also known as its information entropy, and p(x)_i is the probability of grey level i.
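A minimal sketch of this entropy computation for an 8-bit grey-level image follows; 256 grey levels and a base-2 logarithm are assumed, as the base is not stated in the text.

```python
import numpy as np

def entropy(img, levels=256):
    """Information entropy H(x) = -sum_i p(x)_i * log2 p(x)_i of an image."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                  # drop empty bins (0 * log 0 is taken as 0)
    return float(-(p * np.log2(p)).sum())

# Example on a random 8-bit image:
img = np.random.randint(0, 256, size=(64, 64))
print(entropy(img))
```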

Step 5: In the parallel PCNN network, the fusion rule of maximum information entropy is adopted, because information entropy reflects the amount of information in an image: the larger its value, the greater the amount of information and the richer the image.

Step 6: The inverse NSCT is applied to the fused low-frequency and high-frequency components to restore the fused image.
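Putting Steps 1-6 together, the following high-level sketch shows the shape of the whole procedure. No standard Python NSCT implementation is assumed, so `nsct_decompose`, `nsct_reconstruct` and `fuse_high_pcnn` are passed in as hypothetical callables standing for the NSP/NSDFB transforms and the parallel master-slave PCNN rule; `QE` is the Step 2 objective and `golden_section_weight` is the search sketched earlier.

```python
def fuse_images(A, B, nsct_decompose, nsct_reconstruct, fuse_high_pcnn, QE,
                levels=4):
    """Double-rule NSCT-domain fusion per Steps 1-6 (hypothetical helpers)."""
    lowA, highA = nsct_decompose(A, levels)   # low band {S_l}, subbands {D_{l,i}}
    lowB, highB = nsct_decompose(B, levels)

    # Steps 1-2: golden-section search for the low-frequency weight w*,
    # then merge the low bands by Eq. (7).
    w = golden_section_weight(lambda w: QE(w * lowA + (1 - w) * lowB))
    low_fused = w * lowA + (1 - w) * lowB

    # Steps 3-5: parallel master-slave PCNN on each directional subband pair,
    # keeping the fusion result with maximum information entropy.
    high_fused = [fuse_high_pcnn(dA, dB) for dA, dB in zip(highA, highB)]

    # Step 6: inverse NSCT restores the fused image.
    return nsct_reconstruct(low_fused, high_fused)
```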

Experimental results and analysis

The merits of the combined images need to be evaluated comprehensively, using both objective evaluation data and subjective visual impression. The objective indicators must reflect the retention of image contours and other detailed information, so the following indicators are selected for evaluation:

Image fusion evaluation index

Objective evaluation index of image merging:

Mutual information (MI): MI reflects the degree of correlation between two data sets. In image merging, MI measures the correlation between the fused image and each merged image; its value directly reflects the amount of merged information, and in practice, the higher the value, the better the merging effect.

I_{FA} = \sum_{k=0}^{L-1} \sum_{j=0}^{L-1} P_{FA}(k,j) \log_2 \frac{P_{FA}(k,j)}{P_F(k) P_A(j)}

I_{FB} = \sum_{k=0}^{L-1} \sum_{j=0}^{L-1} P_{FB}(k,j) \log_2 \frac{P_{FB}(k,j)}{P_F(k) P_B(j)}

Here the probability densities of images A, B and F are denoted by P_A, P_B and P_F, respectively, and P_FA(k, j) and P_FB(k, j) are the joint probability densities of the fusion result with each of the images to be fused. MI is then defined as the sum of the mutual information between the fused image and each of A and B, normalised by the sum of the information entropies of the original images, which maps it to the interval [0, 1]:

MI = \frac{I_{FA} + I_{FB}}{H_A + H_B}
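A sketch of these MI definitions using joint 2-D histograms is given below; 256 grey levels and a base-2 logarithm are assumed, and `entropy` refers to the sketch given earlier.

```python
import numpy as np

def mutual_information(X, F, levels=256):
    """I_FX between source image X and fused image F via joint histograms."""
    joint, _, _ = np.histogram2d(X.ravel(), F.ravel(),
                                 bins=levels, range=[[0, levels], [0, levels]])
    pXF = joint / joint.sum()                    # joint density P_FX(k, j)
    pX = pXF.sum(axis=1, keepdims=True)          # marginal of X
    pF = pXF.sum(axis=0, keepdims=True)          # marginal of F
    nz = pXF > 0                                 # avoid log(0)
    return float((pXF[nz] * np.log2(pXF[nz] / (pX @ pF)[nz])).sum())

def normalised_mi(A, B, F):
    """MI = (I_FA + I_FB) / (H_A + H_B), using entropy() from the earlier sketch."""
    return (mutual_information(A, F) + mutual_information(B, F)) / \
           (entropy(A) + entropy(B))
```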

Edge-dependent fusion quality index (EFQI). As an advanced objective parameter for evaluating image fusion results, EFQI effectively reflects how well edges are preserved in the fusion result and whether a ringing effect appears in details such as contours and edges, and it can measure their extent:

Q_{EFQI}(A,B,F) = Q_\omega(A,B,F)^{1-\varepsilon} \times Q_\omega(A_1,B_1,F_1)^{\varepsilon}

Here Q_{EFQI}(A, B, F) is the EFQI and Q_ω(A, B, F) is the weighted fusion evaluation index. The parameter ɛ ∈ [0, 1] describes the importance of contour details in the image; values close to 1 indicate that the contours are more important.

Analysis of comparative experimental results

The images in the experiment are decomposed into four layers. In the wavelet-based fusion strategies, the wavelet basis must be chosen with the possibility of visual artefacts in mind, especially ringing and jitter, which are related to the discrete nature of sampling when downsampling is applied. If a signal is shifted by a non-integer amount and a constant local region adjoins a sharp edge, ringing is amplified: after interpolation, translation and resampling, the new samples can no longer be expressed as a constant in the transform domain and tend to oscillate (the Gibbs phenomenon). Short decomposition and reconstruction filters reduce ringing, but a very short filter worsens frequency selectivity. On balance, Daubechies filters with 8 or 10 coefficients give good results for multi-scale image fusion, so the wavelet basis selected for WTF and SWTF is 'db8'. Both CT and NSCT adopt the classical 'bior' pyramid decomposition and 'DFB' directional filter banks; the numbers of directional subbands from fine scale to coarse scale are 16, 8, 4 and 4. The fusion rule used by the comparison multi-scale, multi-direction merging methods is: the approximation coefficients at the coarsest scale are averaged; the other decomposition coefficients, which are generally complex-valued, are compared by modulus (absolute value), and the larger one is selected as the final fusion coefficient.

Table 1 compares the objective fusion indexes obtained by several fusion methods. The table clearly shows the average amount of information, edge fusion quality and cross-correlation information of the various algorithms. The algorithm of this paper improves on the other algorithms in all three indicators, indicating that the dual-fusion-rule algorithm retains more details during image merging.

Comparison of experimental results

Images       Criteria   WTF      SWTF     CTF      Proposed
ClockA/B     MI         6.6299   7.1007   7.4105   8.4101
             QAB/F      0.6789   0.7211   0.7298   0.7521
BaboonA/B    MI         5.5965   6.1906   6.3610   7.5002
             QAB/F      0.5701   0.6114   0.6418   0.7124
HatA/B       MI         6.9603   7.0109   7.4201   7.6017
             QAB/F      0.6659   0.6941   0.7802   0.8121
CT&MRIA/B    MI         3.1006   4.0247   4.2099   4.3640
             QAB/F      0.6174   0.6429   0.6802   0.7461

MI: mutual information

Figures 4 and 5 show the fusion results of the image merging methods using the wavelet transform and NSCT as the scale decomposition. It is obvious from the images that the constructed merging technique preserves most of the detail of the image, such as contours, edges and texture. Because NSCT is a non-downsampling transform, the edges are smooth and there is no 'artefact' phenomenon.

Fig. 4

Image fusion results of ClockA/B

Fig. 5

Image fusion results of CT/MRI

As Figures 4 and 5 show, WTF exhibits ringing, because WTF is not a non-sampled transform. The fusion effect of SWTF and CTF is also not ideal, because they do not fully consider the differences between coefficients.

Conclusions

NSCT, built on the framework of the contourlet transform, abandons the downsampling operation; it achieves a multi-band frequency decomposition of the image without distortion, and further decomposes each high-frequency band into directional coefficients. Because the decomposed high- and low-frequency coefficients are translation invariant, there is no spectrum aliasing or image distortion. When NSCT is applied to image fusion, double fusion rules are adopted: for the low-frequency information, the golden section method adaptively selects the best weight as the low-frequency fusion coefficient; for the high-frequency directional subbands, parallel PCNN detects edges, contours and other details and achieves a better fusion effect. Simulation results show that the algorithm improves both the objective evaluation indexes and the subjective visual quality, and is highly applicable to the actual merging of natural and medical images.


[1] Liu Y, Wang L, Cheng J, et al. Multi-focus image fusion: A survey of the state of the art. Information Fusion, 2020, 64: 71–91. doi:10.1016/j.inffus.2020.06.013

[2] Jinju J, Santhi N, Ramar K, et al. Spatial frequency discrete wavelet transform image fusion technique for remote sensing applications. Engineering Science and Technology, an International Journal, 2019, 22(3): 715–726. doi:10.1016/j.jestch.2019.01.004

[3] Ghassemian H. A review of remote sensing image fusion methods. Information Fusion, 2016, 32: 75–89. doi:10.1016/j.inffus.2016.03.003

[4] Ma J, Ma Y, Li C. Infrared and visible image fusion methods and applications: A survey. Information Fusion, 2019, 45: 153–178. doi:10.1016/j.inffus.2018.02.004

[5] Liu Y, Chen X, Wang Z, et al. Deep learning for pixel-level image fusion: Recent advances and future prospects. Information Fusion, 2018, 42: 158–173. doi:10.1016/j.inffus.2017.10.007

[6] Ma J, Yu W, Liang P, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion. Information Fusion, 2019, 48: 11–26. doi:10.1016/j.inffus.2018.09.004

[7] Ma J, Liang P, Yu W, et al. Infrared and visible image fusion via detail preserving adversarial learning. Information Fusion, 2020, 54: 85–98. doi:10.1016/j.inffus.2019.07.005

[8] Rhif M, Ben Abbes A, Farah I R, et al. Wavelet transform application for/in non-stationary time-series analysis: a review. Applied Sciences, 2019, 9(7): 1345. doi:10.3390/app9071345

[9] Singh D, Garg D, Singh Pannu H. Efficient landsat image fusion using fuzzy and stationary discrete wavelet transform. The Imaging Science Journal, 2017, 65(2): 108–114. doi:10.1080/13682199.2017.1289629

[10] Ma G, Zhao J. Quaternion ridgelet transform and curvelet transform. Advances in Applied Clifford Algebras, 2018, 28(4): 1–21. doi:10.1007/s00006-018-0897-0

[11] Ma J, Plonka G. The curvelet transform. IEEE Signal Processing Magazine, 2010, 27(2): 118–133. doi:10.1109/MSP.2009.935453

[12] Li B, Peng H, Wang J. A novel fusion method based on dynamic threshold neural P systems and nonsubsampled contourlet transform for multi-modality medical images. Signal Processing, 2021, 178: 107793. doi:10.1016/j.sigpro.2020.107793

[13] Manchanda M, Sharma R. An improved multimodal medical image fusion algorithm based on fuzzy transform. Journal of Visual Communication and Image Representation, 2018, 51: 76–94. doi:10.1016/j.jvcir.2017.12.011

[14] Ding S, Zhao X, Xu H, et al. NSCT-PCNN image fusion based on image gradient motivation. IET Computer Vision, 2018, 12(4): 377–383. doi:10.1049/iet-cvi.2017.0285

[15] Zhang S, Liu B, Huang F. Multimodel fusion method via sparse representation at pixel-level and feature-level. Optical Engineering, 2019, 58(6): 063105. doi:10.1117/1.OE.58.6.063105

[16] Guo Y, Wang C, Lei S, et al. A framework of spatio-temporal fusion algorithm selection for Landsat NDVI time series construction. ISPRS International Journal of Geo-Information, 2020, 9(11): 665. doi:10.3390/ijgi9110665

[17] Panigrahy C, Seal A, Mahato N K. MRI and SPECT image fusion using a weighted parameter adaptive dual channel PCNN. IEEE Signal Processing Letters, 2020, 27: 690–694. doi:10.1109/LSP.2020.2989054

[18] Na Y, Zhao L, Yang Y, et al. Guided filter-based images fusion algorithm for CT and MRI medical images. IET Image Processing, 2018, 12(1): 138–148. doi:10.1049/iet-ipr.2016.0920

[19] Na Y, Zhao L, Yang Y, et al. Guided filter-based images fusion algorithm for CT and MRI medical images. IET Image Processing, 2018, 12(1): 138–148. doi:10.1049/iet-ipr.2016.0920

[20] Arain M S, Khan M A, Kalwar M A. Optimization of target calculation method for leather skiving and stamping: Case of leather footwear industry. International Journal of Education and Management Studies, 2020, 7(1): 15–30.

[21] Feng W, Yi L, Sato M. Near range radar imaging based on block sparsity and cross-correlation fusion algorithm. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018, 11(6): 2079–2089. doi:10.1109/JSTARS.2018.2797056

[22] Penault-Llorca F, Rudzinski E R, Sepulveda A R. Testing algorithm for identification of patients with TRK fusion cancer. Journal of Clinical Pathology, 2019, 72(7): 460–467. doi:10.1136/jclinpath-2018-205679
