Pixel-Based Clustering for Local Interpretable Model-Agnostic Explanations
Published online: 18 Mar 2025
Pages: 257 - 277
Received: 03 Oct 2024
Accepted: 14 Jan 2025
DOI: https://doi.org/10.2478/jaiscr-2025-0013
© 2025 Junyan Qian et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
To enhance the interpretability of black-box machine learning models, model-agnostic explanations have become a focal point of interest. This paper introduces Pixel-based Local Interpretable Model-agnostic Explanations (PLIME), a method that generates perturbation samples via pixel clustering to derive raw explanations. Through iterative refinement, it reduces the number of features, culminating in an optimal feature set that best predicts the model's score. PLIME increases the relevance of features associated with correct predictions in the explanations. A comprehensive evaluation of PLIME is conducted against LIME and SHAP, focusing on faithfulness, stability, and minimality. Additionally, the predictions from both PLIME and LIME are fed into Randomized Input Sampling for Explanation (RISE) to compare minimality. The results demonstrate PLIME's significant advantages in stability, faithfulness, and minimality.
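The perturbation step described above (clustering pixels into regions, then sampling masked variants of the image) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names `cluster_pixels` and `perturb` are hypothetical, and k-means over color-plus-position features is only one plausible clustering choice, since the abstract does not specify the algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_pixels(image, n_clusters=8, seed=0):
    """Group the pixels of an (H, W, C) image into clusters using
    color and spatial position as features (one plausible choice)."""
    h, w, c = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.concatenate(
        [image.reshape(-1, c), xs.reshape(-1, 1), ys.reshape(-1, 1)],
        axis=1,
    ).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(feats)
    return labels.reshape(h, w)

def perturb(image, segments, n_samples=16, seed=0):
    """Generate perturbation samples by randomly switching pixel
    clusters off; disabled clusters are replaced by the mean color."""
    rng = np.random.default_rng(seed)
    n_seg = int(segments.max()) + 1
    masks = rng.integers(0, 2, size=(n_samples, n_seg))  # binary on/off per cluster
    mean_color = image.mean(axis=(0, 1))
    samples = []
    for m in masks:
        keep = m[segments].astype(bool)  # broadcast cluster mask to pixel level
        samples.append(np.where(keep[..., None], image, mean_color))
    return masks, np.stack(samples)
```

The binary masks and the black-box model's scores on the corresponding perturbed images would then supply the data for fitting the local surrogate, as in LIME.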