Interpretable Fault Detection Approach With Deep Neural Networks to Industrial Applications

Kakavandi, Fatemeh; Han, Peihua; Reus, Roger de; Larsen, Peter Gorm; Zhang, Houxiang
Chapter
Submitted version
View/Open
ICCAD23_paper.pdf (2.020Mb)
URI
https://hdl.handle.net/11250/3107806
Date
2023
Collections
  • Institutt for havromsoperasjoner og byggteknikk [1086]
  • Publikasjoner fra CRIStin - NTNU [41869]
Original version
10.1109/ICCAD57653.2023.10152435
Abstract
Various explainability techniques have been introduced to address the challenges of complex machine-learning models, such as uncertainty and lack of interpretability, in sensitive processes. This paper presents an interpretable deep-learning-based fault detection approach for two separate but relatively sensitive use cases. The first use case involves a vessel engine that aims to replicate a real-life ferry crossing. The second is an industrial medical-device assembly line that mounts and engages different product components. In this approach, we first investigate two deep-learning models that classify samples as normal or abnormal. Different explainability algorithms are then studied to explain the prediction outcomes of both models, and these methods are evaluated both quantitatively and qualitatively. Ultimately, the deep-learning model with the best-performing explainability algorithm is chosen as the final interpretable fault detector. However, the most suitable classifier and explainability technique depend on the use case. For example, for fault detection on the medical-device assembly line, the DeepLiftShap algorithm aligns best with expert knowledge and therefore achieves higher qualitative results, whereas the Occlusion algorithm has lower sensitivity and therefore higher quantitative results. Consequently, choosing the final explainability algorithm is a trade-off between the qualitative and quantitative performance of the method.
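
The pipeline the abstract describes (a deep classifier followed by an attribution method such as DeepLiftShap or Occlusion) can be illustrated with the open-source Captum library, which implements both algorithms. The following is a minimal sketch, not the paper's implementation: the FaultDetector network, channel count, and window sizes are hypothetical placeholders chosen only to make the example self-contained.

import torch
import torch.nn as nn
from captum.attr import DeepLiftShap, Occlusion

# Hypothetical 1-D CNN classifier (not the paper's model): input is
# (batch, channels, time), output is logits for normal vs. abnormal.
class FaultDetector(nn.Module):
    def __init__(self, n_channels: int = 8, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = FaultDetector().eval()
inputs = torch.randn(4, 8, 128)      # 4 sensor windows (placeholder data)
baselines = torch.zeros(10, 8, 128)  # reference distribution for DeepLiftShap

# DeepLiftShap: attributions with respect to the "abnormal" class (index 1).
dls = DeepLiftShap(model)
attr_dls = dls.attribute(inputs, baselines=baselines, target=1)

# Occlusion: slide a zero patch over the signal and measure the output change.
occ = Occlusion(model)
attr_occ = occ.attribute(
    inputs,
    sliding_window_shapes=(1, 16),  # occlude one channel, 16 time steps at a time
    strides=(1, 8),
    target=1,
)

print(attr_dls.shape, attr_occ.shape)  # both attributions match the input shape

Comparing the two attribution maps against expert knowledge (qualitative) and against perturbation-based metrics such as sensitivity (quantitative) mirrors the selection procedure the abstract outlines.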
Publisher
IEEE
