Pixel-Based Clustering for Local Interpretable Model-Agnostic Explanations
Published online: 18 Mar 2025
Pages: 257 - 277
Received: 03 Oct 2024
Accepted: 14 Jan 2025
DOI: https://doi.org/10.2478/jaiscr-2025-0013
Keywords
© 2025 Junyan Qian et al., published by Sciendo
This work is licensed under the Creative Commons Attribution 4.0 International License.
To enhance the interpretability of black-box machine learning models, model-agnostic explanations have become a focal point of interest. This paper introduces Pixel-based Local Interpretable Model-agnostic Explanations (PLIME), a method that generates perturbation samples via pixel clustering to derive raw explanations. Through iterative refinement, PLIME reduces the number of features to an optimal set that best predicts the model’s score, increasing the relevance of features associated with correct predictions in the explanations. A comprehensive evaluation of PLIME is conducted against LIME and SHAP, focusing on faithfulness, stability, and minimality. Additionally, the predictions from both PLIME and LIME are fed into Randomized Input Sampling for Explanation (RISE) to compare minimality. The results demonstrate PLIME’s significant advantages in stability, faithfulness, and minimality.
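The pixel-clustering perturbation step described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes segments are obtained by k-means over per-pixel color-plus-position features, and that a perturbation sample masks out whole segments according to a binary mask, in the style of LIME-like image explainers.

```python
import numpy as np
from sklearn.cluster import KMeans


def cluster_pixels(image, n_clusters=5, seed=0):
    """Group pixels into segments via k-means on (color, position) features.

    Hypothetical stand-in for PLIME's pixel-clustering step; the exact
    feature design and clustering algorithm are assumptions.
    """
    h, w, c = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.concatenate(
        [image.reshape(-1, c),
         xs.reshape(-1, 1) / w,   # normalized x coordinate
         ys.reshape(-1, 1) / h],  # normalized y coordinate
        axis=1,
    )
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(feats).reshape(h, w)


def perturb(image, segments, mask, fill=0.0):
    """Build one perturbation sample: fill every segment whose mask bit is 0."""
    out = image.copy()
    for k in np.unique(segments):
        if mask[k] == 0:
            out[segments == k] = fill
    return out


# Usage: segment a random image and mask out two of four segments.
rng = np.random.default_rng(0)
img = rng.random((16, 16, 3))
seg = cluster_pixels(img, n_clusters=4)
sample = perturb(img, seg, mask=np.array([1, 0, 1, 0]))
```

Each binary mask yields one perturbation sample; scoring many such samples with the black-box model is what lets a surrogate attribute importance to individual pixel clusters.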