An Empirical Analysis of User Preferences Regarding XAI Metrics
Lecture Notes in Computer Science, 2024, vol. 14775, pp. 96-110. DOI: 10.1007/978-3-031-63646-2_7

Abstract
In this paper, we explore the problem of evaluating explanations in Explainable AI (XAI). While there are objective metrics for measuring the quality of explanations, these metrics may not always be fully representative of the quality perceived by end-users. We present an empirical investigation, conducted through an online evaluation, that gathers data on user preferences regarding explanations generated by three categories of explainers applied to image classification: instance-based explanations (nearest neighbors and counterfactuals) and two families of attribution-based methods (feature-based and segmentation-based). We then examine the correlation between objective XAI metrics and user preferences. The results show that certain metrics are strongly correlated with user satisfaction and that the perceived quality of an explanation may vary with the background knowledge of end-users.