Published online: 04 Jun 2025
Pages: 3 - 9
Received: 28 Jun 2024
Accepted: 08 Nov 2024
DOI: https://doi.org/10.2478/aei-2025-0005
Keywords
© 2025 Miroslava Matejová et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
In recent years, we have witnessed the rapid development of artificial intelligence systems and their spread into many fields. These systems are efficient and powerful, but often opaque and insufficiently transparent. Explainable artificial intelligence (XAI) methods attempt to address this problem. XAI is still a developing area of research, but it already shows considerable potential for improving the transparency and trustworthiness of AI models. Thanks to XAI, we can build more responsible and ethical AI systems that better serve people's needs. The aim of this study is to focus on the role of the user. Part of the work is a comparison of several explainability methods, such as LIME, SHAP, ANCHORS and PDP, on a selected data set from the field of medicine. The individual explainability methods were compared from various aspects by means of a user study.
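As a minimal sketch of one of the listed methods, a partial dependence profile (PDP) can be computed with scikit-learn. The specific medical data set used in the study is not named in this abstract, so the breast cancer dataset and the random forest model below are assumptions chosen purely for illustration:

```python
# Hypothetical PDP example; the dataset and model are illustrative
# stand-ins, not the ones used in the study described above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence

# A medical-domain toy dataset (binary tumor classification).
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Average partial dependence of the predicted probability on feature 0
# ("mean radius"): the model's output averaged over the data while the
# chosen feature is swept across a grid of values.
pd_result = partial_dependence(model, X, features=[0], kind="average")
print(pd_result["average"].shape)
```

The resulting curve shows how the predicted class probability changes, on average, as the selected feature varies, which is the kind of global explanation PDP provides, in contrast to the local, per-instance explanations of LIME, SHAP and ANCHORS.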