Show simple item record

dc.contributor.author: Kakavandi, Fatemeh
dc.contributor.author: Han, Peihua
dc.contributor.author: Reus, Roger de
dc.contributor.author: Larsen, Peter Gorm
dc.contributor.author: Zhang, Houxiang
dc.date.accessioned: 2023-12-15T13:01:02Z
dc.date.available: 2023-12-15T13:01:02Z
dc.date.created: 2023-08-04T14:43:50Z
dc.date.issued: 2023
dc.identifier.isbn: 979-8-3503-4707-4
dc.identifier.uri: https://hdl.handle.net/11250/3107806
dc.description.abstract: Different explainable techniques have been introduced to overcome challenges in complex machine learning models, such as uncertainty and lack of interpretability in sensitive processes. This paper presents an interpretable deep-learning-based fault detection approach for two separate but relatively sensitive use cases. The first use case involves a vessel engine that aims to replicate a real-life ferry crossing, and the second is an industrial medical-device assembly line that mounts and engages different product components. In this approach, we first investigate two deep-learning models that classify samples as normal or abnormal. Then, different explainable algorithms are studied to explain the prediction outcomes of both models, and quantitative and qualitative evaluations of these methods are carried out. Ultimately, the deep-learning model with the best-performing explainable algorithm is chosen as the final interpretable fault detector. However, depending on the use case, different classifiers and explainable techniques should be selected. For example, for fault detection on the medical-device assembly line, the DeepLiftShap algorithm aligns best with expert knowledge and therefore achieves higher qualitative results, whereas the Occlusion algorithm has lower sensitivity and therefore higher quantitative results. Consequently, choosing the final explainable algorithm is a trade-off between the qualitative and quantitative performance of the method. (en_US)
dc.language.iso: eng (en_US)
dc.publisher: IEEE (en_US)
dc.relation.ispartof: 2023 International Conference on Control, Automation and Diagnosis (ICCAD)
dc.title: Interpretable Fault Detection Approach With Deep Neural Networks to Industrial Applications (en_US)
dc.title.alternative: Interpretable Fault Detection Approach With Deep Neural Networks to Industrial Applications (en_US)
dc.type: Chapter (en_US)
dc.description.version: submittedVersion (en_US)
dc.source.pagenumber: 1-7 (en_US)
dc.identifier.doi: 10.1109/ICCAD57653.2023.10152435
dc.identifier.cristin: 2164922
cristin.ispublished: true
cristin.fulltext: preprint
cristin.qualitycode: 1


Associated file(s)


This item appears in the following collection(s)
