A Twin XCBR System Using Supportive and Contrastive Explanations
Chapter
Published version
Permanent link
https://hdl.handle.net/11250/3086134
Publication date
2023
Abstract
Machine learning models are increasingly applied in safety-critical domains, so ensuring their trustworthiness and reliability has become a priority. Uncertainty measures the lack of trust in these models, and explanation systems designed as twin systems can give users insight into model decisions. Case-based reasoning (CBR) is an experience-based problem-solving methodology with applications across many domains. In this work, we propose a novel approach to generating a twin system, specifically a multi-agent CBR system (MA-CBR system), which uses feature attribution-based Explainable Artificial Intelligence (XAI) techniques to explain black-box models in multi-class classification tasks. The proposed approach provides contrastive or supportive instance-based explanations, enabling users to interpret model outputs. Furthermore, we introduce an evaluation metric that assesses the system's quality by how well it supports the performance of the underlying black-box model, which we measure through a confidence score. To evaluate our approach, we apply it to three distinct datasets with differing characteristics. Our results demonstrate the effectiveness of the proposed approach in generating explanations for black-box models in multi-class classification tasks.