Open Access

Pixel-Based Clustering for Local Interpretable Model-Agnostic Explanations

18 Mar 2025


To enhance the interpretability of black-box machine learning models, model-agnostic explanations have become a focal point of research. This paper introduces Pixel-based Local Interpretable Model-agnostic Explanations (PLIME), a method that generates perturbation samples via pixel clustering to derive raw explanations. Through iterative refinement, it reduces the number of features, culminating in an optimal feature set that best predicts the model’s score. PLIME increases the relevance of features associated with correct predictions in the explanations. A comprehensive evaluation of PLIME is conducted against LIME and SHAP, focusing on faithfulness, stability, and minimality. Additionally, the predictions from both PLIME and LIME are used in Random Input Sampling Explanations (RISE) to compare minimality. The results demonstrate PLIME’s significant advantages in stability, faithfulness, and minimality.
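The pipeline the abstract describes — cluster pixels, perturb the image by switching clusters on and off, and fit a linear surrogate to the black-box scores — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the intensity-bin clustering, the mask sampling scheme, and all function names here are assumptions, and the iterative feature-reduction step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_pixels(img, k=4):
    # Stand-in for the paper's pixel clustering: assign each pixel to one of
    # k equal-width intensity bins, yielding cluster labels in 0..k-1.
    bins = np.linspace(img.min(), img.max() + 1e-9, k + 1)
    return np.digitize(img, bins[1:-1])

def explain(img, black_box, k=4, n_samples=200):
    # Hypothetical LIME-style explanation over pixel clusters:
    # sample binary masks over clusters, score the masked images with the
    # black box, then fit a linear surrogate mapping masks -> scores.
    labels = cluster_pixels(img, k)
    masks = rng.integers(0, 2, size=(n_samples, k))  # which clusters stay on
    scores = np.empty(n_samples)
    for i, m in enumerate(masks):
        keep = m[labels].astype(img.dtype)  # expand cluster mask to pixels
        scores[i] = black_box(img * keep)
    # Surrogate coefficients rank clusters by their influence on the score.
    coef, *_ = np.linalg.lstsq(masks.astype(float), scores, rcond=None)
    return labels, coef

# Toy black box: mean intensity of the image.
img = rng.random((8, 8))
labels, coef = explain(img, lambda x: x.mean())
```

With this toy black box the score is exactly linear in the cluster masks, so each coefficient recovers the fraction of total intensity its cluster contributes; for a real classifier the surrogate is only a local approximation.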

Language:
English
Publication frequency:
4 times per year
Journal subjects:
Computer Science, Artificial Intelligence, Databases and Data Mining