Automatic Adjusting Global Similarity Measures in Learning CBR Systems
Chapter
Accepted version
Permanent link
https://hdl.handle.net/11250/3150681
Publication date
2024
Abstract
This paper explores how learning case-based reasoning (CBR) systems are affected by updates to their similarity measures. We create CBR systems using the local-global principle and investigate (1) how adding new cases changes the CBR system’s performance and (2) how this drift can be mitigated by updating the similarity measure, in particular by adapting the feature weights of a weighted sum. We aim to provide transparent measures that show when the knowledge containers drift apart and thus indicate when an update is necessary. We therefore explore the effect feature weights have on predictive performance and on the knowledge containers in online learning CBR systems. Following this, we present a method that minimizes feature-weight updates as the case base grows while maintaining performance. The performance is compared to two baselines: never updating and always updating. Our experiments with public datasets show that a smart updating strategy captures the drift between case base content and similarity measures well.
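For readers unfamiliar with the local-global principle, the sketch below illustrates the kind of weighted-sum global similarity whose feature weights the abstract refers to. It is a minimal, hypothetical example, not the paper's implementation: the feature names, local similarity functions, and weight values are assumptions for illustration only.

```python
from typing import Callable, Dict

# Minimal sketch of a weighted-sum global similarity under the
# local-global principle. Feature names, local similarity functions,
# and weight values are hypothetical, not taken from the paper.

def global_similarity(
    query: Dict[str, float],
    case: Dict[str, float],
    local_sims: Dict[str, Callable[[float, float], float]],
    weights: Dict[str, float],
) -> float:
    """Weighted sum of per-feature (local) similarities, normalized by total weight."""
    total_weight = sum(weights.values())
    return sum(
        weights[f] * local_sims[f](query[f], case[f]) for f in weights
    ) / total_weight

# Example: two numeric features with a simple distance-based local similarity.
def numeric_sim(a: float, b: float, value_range: float = 100.0) -> float:
    return 1.0 - min(abs(a - b) / value_range, 1.0)

local_sims = {"age": numeric_sim, "income": numeric_sim}
weights = {"age": 0.3, "income": 0.7}  # the weights are what an adaptive strategy would update

query = {"age": 35.0, "income": 52.0}
case = {"age": 40.0, "income": 48.0}
print(global_similarity(query, case, local_sims, weights))
```

Adapting the feature weights over time, as the paper investigates, would amount to replacing the static `weights` dictionary with values learned as new cases are added to the case base.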