Open Access

Pixel-Based Clustering for Local Interpretable Model-Agnostic Explanations

Mar 18, 2025

To enhance the interpretability of black-box machine learning models, model-agnostic explanations have become a focal point of interest. This paper introduces Pixel-based Local Interpretable Model-agnostic Explanations (PLIME), a method that generates perturbation samples via pixel clustering to derive raw explanations. Through iterative refinement, it reduces the number of features, culminating in an optimal feature set that best predicts the model’s score. PLIME increases the relevance of features associated with correct predictions in the explanations. A comprehensive evaluation of PLIME is conducted against LIME and SHAP, focusing on faithfulness, stability, and minimality. Additionally, the predictions from both PLIME and LIME are utilized in Randomized Input Sampling for Explanation (RISE) to compare minimality. The results demonstrate PLIME’s significant advantages in stability, faithfulness, and minimality.
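The pixel-clustering perturbation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses K-means over (color, position) pixel features as a stand-in for the paper's clustering step, and greys out random subsets of clusters to produce perturbation samples, in the spirit of LIME-style image perturbation. All function names and parameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def pixel_clusters(image, n_clusters=8, seed=0):
    """Cluster pixels by (color, x, y) features into candidate explanation regions.

    This K-means grouping is a stand-in for PLIME's pixel-clustering step.
    """
    h, w, c = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.concatenate(
        [image.reshape(-1, c), xs.reshape(-1, 1), ys.reshape(-1, 1)], axis=1
    ).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=4, random_state=seed).fit_predict(feats)
    return labels.reshape(h, w)

def perturb(image, segments, n_samples=16, seed=0):
    """Generate perturbation samples: random subsets of clusters are replaced
    by the mean color, producing inputs to probe the black-box model."""
    rng = np.random.default_rng(seed)
    n_seg = int(segments.max()) + 1
    masks = rng.integers(0, 2, size=(n_samples, n_seg)).astype(bool)
    baseline = image.mean(axis=(0, 1), keepdims=True)
    samples = []
    for mask in masks:
        keep = mask[segments]  # per-pixel boolean: is this pixel's cluster kept?
        samples.append(np.where(keep[..., None], image, baseline))
    return np.stack(samples), masks

# Toy example on a random 16x16 RGB image.
image = np.random.default_rng(1).random((16, 16, 3))
segments = pixel_clusters(image, n_clusters=5)
samples, masks = perturb(image, segments, n_samples=8)
print(samples.shape)  # (8, 16, 16, 3)
```

Fitting a sparse linear model on `(masks, model_scores)` pairs, then iteratively dropping low-weight clusters, would correspond to the iterative feature-reduction loop the abstract describes.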

Language:
English
Publication frequency:
4 times per year
Journal subjects:
Computer Science, Artificial Intelligence, Databases and Data Mining