Evaluation of Instance-based Explanations: An In-depth Analysis of Counterfactual Evaluation Metrics, Challenges, and the CEval Toolkit
Journal article, Peer reviewed
Accepted version
Permanent link: https://hdl.handle.net/11250/3135652
Publication date: 2024

Abstract
In eXplainable Artificial Intelligence (XAI), instance-based explanations have gained importance as a method for illuminating complex models by highlighting differences or similarities between samples and their explanations. Evaluating these explanations is crucial for assessing their quality and effectiveness. However, the quantitative evaluation of instance-based explanation methods reveals inconsistencies and variations in terminology and metrics. To address this, our survey provides a unified notation for evaluation metrics for instance-based explanations, with a particular focus on counterfactual explanations. It further explores the associated trade-offs, identifies areas for improvement, and offers a practical Python toolkit, CEval. Key contributions include a comprehensive survey of quantitative evaluation metrics, practical support for counterfactual evaluation through the package, and insights into the limitations of explanation evaluation and future directions.
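To make the notion of quantitative counterfactual evaluation concrete, the following is a minimal illustrative sketch (not the CEval API) of three metrics commonly used in this literature: validity (does the counterfactual flip the prediction?), sparsity (how many features are left unchanged?), and proximity (how far is the counterfactual from the original instance?). The classifier, instances, and function names are hypothetical.

```python
def validity(predict, counterfactual, target_class):
    """1.0 if the counterfactual actually attains the target class, else 0.0."""
    return 1.0 if predict(counterfactual) == target_class else 0.0

def sparsity(original, counterfactual):
    """Fraction of features left unchanged (higher means a sparser edit)."""
    unchanged = sum(1 for a, b in zip(original, counterfactual) if a == b)
    return unchanged / len(original)

def proximity(original, counterfactual):
    """L1 distance between the original instance and its counterfactual."""
    return sum(abs(a - b) for a, b in zip(original, counterfactual))

# Toy threshold classifier (hypothetical): class 1 if the feature sum exceeds 1.0.
predict = lambda x: int(sum(x) > 1.0)

x = [0.2, 0.3, 0.1]     # original instance, predicted class 0
x_cf = [0.2, 0.9, 0.1]  # candidate counterfactual: one feature edited

print(validity(predict, x_cf, target_class=1))  # 1.0: prediction flips to class 1
print(sparsity(x, x_cf))                        # 2/3 of features unchanged
print(proximity(x, x_cf))                       # L1 distance of about 0.6
```

In practice such metrics trade off against each other (a more valid counterfactual may be less sparse or less proximal), which is precisely the kind of tension the survey examines.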