Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, Guangxi, China; Guangxi Key Laboratory of Multi-Source Information Mining & Security, Guangxi Normal University, Guilin, Guangxi, China
This work is licensed under the Creative Commons Attribution 4.0 International License.
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion, 58:82–115, 2020.
Waddah Saeed and Christian Omlin. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowledge-Based Syst., 263:110273, 2023.
Dang Minh, H Xiang Wang, Y Fen Li, and Tan N Nguyen. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev., pages 1–66, 2022.
Nataliya Shakhovska, Andrii Shebeko, and Yarema Prykarpatsky. A novel explainable AI model for medical data analysis. Journal of Artificial Intelligence and Soft Computing Research, 14(2):121–137, 2024.
Ivan Laktionov, Grygorii Diachenko, Danuta Rutkowska, and Marek Kisiel-Dorohinicki. An explainable AI approach to agrotechnical monitoring and crop diseases prediction in Dnipro region of Ukraine. Journal of Artificial Intelligence and Soft Computing Research, 13(4):247–272, 2023.
Guang Yang, Qinghao Ye, and Jun Xia. Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: A mini-review, two showcases and beyond. Inf. Fusion, 77:29–52, 2022.
Erico Tjoa and Cuntai Guan. A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Trans. Neural Networks Learn. Syst., 32(11):4793–4813, 2020.
Roel Henckaerts, Katrien Antonio, and Marie-Pier Côté. When stakes are high: Balancing accuracy and transparency with model-agnostic interpretable data-driven surrogates. Expert Syst. Appl., 202:117230, 2022.
Bas HM Van der Velden, Hugo J Kuijf, Kenneth GA Gilhuijs, and Max A Viergever. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med. Image Anal., 79:102470, 2022.
Rudresh Dwivedi, Devam Dave, Het Naik, Smiti Singhal, Rana Omer, Pankesh Patel, Bin Qian, Zhenyu Wen, Tejal Shah, Graham Morgan, et al. Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Comput. Surv., 55(9):1–33, 2023.
Vikas Hassija, Vinay Chamola, Atmesh Mahapatra, Abhinandan Singal, Divyansh Goel, Kaizhu Huang, Simone Scardapane, Indro Spinelli, Mufti Mahmud, and Amir Hussain. Interpreting black-box models: A review on explainable artificial intelligence. Cognitive Computation, 16(1):45–74, 2024.
Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, et al. Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions. Information Fusion, 106:102301, 2024.
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, and Jugal Kalita. Explainable artificial intelligence: A survey of needs, techniques, applications, and future direction. Neurocomputing, page 128111, 2024.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Why should I trust you?” Explaining the predictions of any classifier. In Proc. ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., pages 1135–1144, 2016.
Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Adv. Neural Inf. Process. Syst., 30, 2017.
Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proc. IEEE Int. Conf. Comput. Vision, pages 618–626, 2017.
Vitali Petsiuk, Abir Das, and Kate Saenko. RISE: Randomized input sampling for explanation of black-box models. In BMVC - Br. Mach. Vis. Conf. Proc., 2018.
Aaron Fisher, Cynthia Rudin, and Francesca Dominici. All models are wrong, but many are useful: Learning a variable’s importance by studying an entire class of prediction models simultaneously. J. Mach. Learn. Res., 20(177):1–81, 2019.
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit., pages 2921–2929, 2016.
Yao Wang, Chongzhi Zu, Kexin Fu, and Hanchuan Peng. Shapley values for feature selection: The good, the bad, and the axioms. IEEE Trans. Pattern Anal. Mach. Intell., 45(4):4918–4932, 2023.
Muhammad Rehman Zafar and Naimul Khan. Deterministic local interpretable model-agnostic explanations for stable explainability. Mach. Learn. Knowl. Extr., 3(3):525–541, 2021.
Sheng Shi, Yangzhou Du, and Wei Fan. Kernel-based LIME with feature dependency sampling. In Proc. Int. Conf. Pattern Recognit., pages 9143–9148. IEEE, 2021.
Qingyao Ai and Lakshmi Narayanan R. Model-agnostic vs. model-intrinsic interpretability for explainable product search. In Proc. Int. Conf. Inf. Knowl. Manage., pages 5–15, 2021.
Abiodun M Ikotun, Absalom E Ezugwu, Laith Abualigah, Belal Abuhaija, and Jia Heming. K-means clustering algorithms: A comprehensive review, variants analysis, and advances in the era of big data. Inf. Sci., 622:178–210, 2023.
Kristina P Sinaga and Miin-Shen Yang. Unsupervised k-means clustering algorithm. IEEE Access, 8:80716–80727, 2020.
Taher M Ghazal. Performances of k-means clustering algorithm with different distance metrics. Intell. Autom. Soft Comput., 30(2):735–742, 2021.
Heinrich Jiang, Jennifer Jang, and Samory Kpotufe. Quickshift++: Provably good initializations for sample-based mean shift. In Proc. Int. Conf. Mach. Learn., pages 2294–2303. PMLR, 2018.
Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. StarGAN v2: Diverse image synthesis for multiple domains. In Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit., pages 8188–8197, 2020.
Teng Li, Amin Rezaeipanah, and ElSayed M Tag El Din. An ensemble agglomerative hierarchical clustering algorithm based on clusters clustering technique and the novel similarity measurement. J. King Saud Univ. Comput. Inf. Sci., 34(6):3828–3842, 2022.
Xin Han, Ye Zhu, Kai Ming Ting, and Gang Li. The impact of isolation kernel on agglomerative hierarchical clustering algorithms. Pattern Recognit., 139:109517, 2023.
Sven Kosub. A note on the triangle inequality for the Jaccard distance. Pattern Recognit. Lett., 120:36–38, 2019.
Diogo V Carvalho, Eduardo M Pereira, and Jaime S Cardoso. Machine learning interpretability: A survey on methods and metrics. Electronics, 8(8):832, 2019.
Irving Biederman. Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2):115, 1987.
Ruth C Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In Proc. IEEE Int. Conf. Comput. Vision, pages 3429–3437, 2017.
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit., pages 1–9, 2015.
Feng Chen, Jiangshu Wei, Bing Xue, and Mengjie Zhang. Feature fusion and kernel selective in Inception-v4 network. Appl. Soft Comput., 119:108582, 2022.