Open Access

Bimodal and trimodal image fusion: A study of subjective scores and objective measures

13 February 2025


O. Zelmati, B. Bondzulic, B. Pavlovic, I. Tot, and S. Merrouche, “Study of subjective and objective quality assessment of infrared compressed images”, Journal of Electrical Engineering, vol. 73, no. 2, pp. 73–87, doi: 10.2478/jee-2022-0011, 2022.

D. Peric, B. Livada, M. Peric, and S. Vujic, “Thermal imager range: Predictions, expectations, and reality”, Sensors, vol. 19, no. 15, Art. no. 3313, doi: 10.3390/s19153313, 2019.

R. Gade, and T. B. Moeslund, “Thermal cameras and applications: A survey”, Machine Vision and Applications, vol. 25, no. 1, pp. 245-262, doi: 10.1007/s00138-013-0570-5, 2014.

D. L. Hickman, and S. J. Shepperd, “Image fusion systems for surveillance applications: Design options and constraints for a tri-band camera”, Proceedings SPIE Infrared Technology and Applications XLVII, pp. 310-328, doi: 10.1117/12.2584984, 2021.

C. Jiang, H. Ren, H. Yang, H. Huo, P. Zhu, Z. Yao, J. Li, M. Sun, and S. Yang, “M2FNet: Multi-modal fusion network for object detection from visible and thermal infrared images”, International Journal of Applied Earth Observation and Geoinformation, vol. 130, Art. no. 103918, doi: 10.1016/j.jag.2024.103918, 2024.

R. Li, M. Zhou, D. Zhang, Y. Yan, and Q. Huo, “A survey of multi-source image fusion”, Multimedia Tools and Applications, vol. 83, no. 6, pp. 18573-18605, doi: 10.1007/s11042-023-16071-9, 2024.

Y. Hua, W. Xu, and Y. Ai, “A residual ConvNeXt-based network for visible and infrared image fusion”, 4th International Conference on Electronic Communication and Artificial Intelligence (ICECAI), Guangzhou, China, May 12-14, pp. 370-376, doi: 10.1109/ICECAI58670.2023.10176540, 2023.

R. Soroush, and Y. Baleghi, “NIR/RGB image fusion for scene classification using deep neural networks”, The Visual Computer, vol. 39, no. 7, pp. 2725-2739, doi: 10.1007/s00371-022-02488-0, 2023.

X. Zhang, and Y. Demiris, “Visible and infrared image fusion using deep learning”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 8, pp. 10535-10554, doi: 10.1109/TPAMI.2023.3261282, 2023.

Y. Liu, S. Liu, and Z. Wang, “A general framework for image fusion based on multi-scale transform and sparse representation”, Information Fusion, vol. 24, pp. 147-164, doi: 10.1016/j.inffus.2014.09.004, 2015.

H. Xu, J. Ma, J. Jiang, X. Guo, and H. Ling, “U2Fusion: A unified unsupervised image fusion network”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 1, pp. 502-518, doi: 10.1109/TPAMI.2020.3012548, 2022.

J. Ma, L. Tang, F. Fan, J. Huang, X. Mei, and Y. Ma, “SwinFusion: Cross-domain long-range learning for general image fusion via Swin Transformer”, IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 7, pp. 1200-1217, doi: 10.1109/JAS.2022.105686, 2022.

A. Toet, M. A. Hogervorst, and A. R. Pinkus, “The TRICLOBS dynamic multi-band image data set for the development and evaluation of image fusion methods”, PLoS One, vol. 11, no. 12, Art. no. e0165016, doi: 10.1371/journal.pone.0165016, 2016.

K. Xiao, X. Kang, H. Liu, and P. Duan, “MOFA: A novel dataset for multi-modal image fusion applications”, Information Fusion, vol. 96, pp. 144-155, doi: 10.1016/j.inffus.2023.03.012, 2023.

Z. Liu, H. Mao, C. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, “A ConvNet for the 2020s”, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, June 18-24, pp. 11966-11976, doi: 10.1109/CVPR52688.2022.01167, 2022.

P. J. Burt, and E. H. Adelson, “The Laplacian pyramid as a compact image code”, IEEE Transactions on Communications, vol. 31, no. 4, pp. 532-540, doi: 10.1016/B978-0-08-051581-6.50065-9, 1983.

X. Zhang, P. Ye, and G. Xiao, “VIFB: A visible and infrared image fusion benchmark”, IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, June 14-19, pp. 468-478, doi: 10.1109/CVPRW50498.2020.00060, 2020.

S. Singh, H. Singh, G. Bueno, O. Deniz, S. Singh, H. Monga, P. N. Hrisheekesha, and A. Pedraza, “A review of image fusion: Methods, applications and performance metrics”, Digital Signal Processing, vol. 137, Art. no. 104020, doi: 10.1016/j.dsp.2023.104020, 2023.

A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a ‘completely blind’ image quality analyzer”, IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209-212, doi: 10.1109/LSP.2012.2227726, 2013.

A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain”, IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, doi: 10.1109/TIP.2012.2214050, 2012.

N. Venkatanath, D. Praneeth, M. C. Bh, S. S. Channappayya, and S. S. Medasani, “Blind image quality evaluation using perception based features”, Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February - 01 March, pp. 1-6, doi: 10.1109/NCC.2015.7084843, 2015.

Z. M. Laidouni, B. Bondžulić, D. Bujaković, V. Petrović, T. Adli, and M. Andrić, “BTIFD: Bimodal and trimodal image fusion database”, Mendeley Data, V1, available at: https://data.mendeley.com/datasets/btnws5tbcm/1, 2024.

Language:
English
Publication frequency:
6 issues per year
Journal subjects:
Engineering, Introductions and Overviews; Engineering, Other