Show simple item record

dc.contributor.advisor: Lekkas, Anastasios
dc.contributor.advisor: Strümke, Inga
dc.contributor.author: Gjærum, Vilde Benoni
dc.date.accessioned: 2023-05-19T11:21:27Z
dc.date.available: 2023-05-19T11:21:27Z
dc.date.issued: 2023
dc.identifier.isbn: 978-82-326-6955-4
dc.identifier.issn: 2703-8084
dc.identifier.uri: https://hdl.handle.net/11250/3068342
dc.description.abstract: Artificial intelligence (AI) and machine learning (ML) offer a number of benefits in multiple applications within the field of robotics, such as computer vision, object grasping, motion control, and planning. Although AI methods can boost performance in many robotic tasks, their utility is limited by the fact that humans struggle to understand how these methods operate. AI or ML models that are so complex that we cannot understand them are called black boxes. Our lack of understanding of these black boxes can lead to a lack of trust in systems that work perfectly well, or too much trust in systems that might not be trustworthy. Additionally, understanding the black boxes can help us improve them, detect their weaknesses, better assess in which scenarios the black box can be applied safely, and ensure that the black box obeys laws and regulations. These are some of the shortcomings of AI that the field of explainable artificial intelligence (XAI) addresses. This thesis presents topics related to XAI in robotics. The main part of the thesis is a collection of four peer-reviewed papers: two journal papers and two conference papers. Additionally, one submitted conference paper is included. The first part of the thesis contains an introduction to the thesis as well as to its main topics, namely ML in robotics, XAI, and linear model trees (LMTs). This first part provides context for the publications and relates them to each other. In this thesis, LMTs are used as an XAI method. LMTs are decision trees (DTs) with a linear prediction function in the leaf nodes. An LMT divides the input space into distinct regions and fits a linear function to each region, thus forming a piecewise linear function approximator.
LMTs can be used as an XAI method by approximating the black box and subsequently analysing the LMT to gain a better understanding of the black box. The first step when using LMTs for explainability is to build the tree to approximate the black box. To do so, we must gather data from the world and collect the corresponding output responses from the black box. We then use this dataset to build the LMT in a supervised manner. The validity of the explanations depends on how similar the LMT is to the black box, so great care must be taken when gathering the dataset and building the tree. We found that introducing domain knowledge into the building process improved the tree’s accuracy and building time. We use LMTs as post-hoc, model-agnostic surrogate models, meaning that the LMT is an XAI method that mimics any type of black-box model that is already built. In addition to giving explanations in the form of feature attributions and counterfactuals, an LMT is also an explanation in itself, since the tree’s structure and linear prediction functions represent the black-box model in a simpler manner. We show that LMTs are capable of generating feature attributions and counterfactuals in real time, even for complex robotic applications. Once the explanations have been generated, we must make sure they are effectively communicated to the user of the AI system. How an explanation can best be communicated depends on the system to be explained, the application the system is used for, and who the recipient of the explanation is. We suggest two different visualizations of feature attributions for two different end-users, based on their background knowledge and characteristics. (en_US)
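The surrogate-building loop described in the abstract (sample inputs, query the black box for its responses, fit a tree of linear leaf models to the resulting dataset) can be sketched in miniature. This is an illustrative toy, not code from the thesis: it uses a 1-D "black box" (here simply |x|) and a depth-1 linear model tree with least-squares leaf fits, whereas the thesis applies deeper LMTs to multi-input robotic agents. All names (`build_lmt`, `fit_leaf`, `black_box`) are hypothetical.

```python
import numpy as np

def fit_leaf(X, y):
    # Fit a linear leaf model y ≈ a*x + b by least squares.
    A = np.column_stack([X, np.ones_like(X)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # (a, b)

def leaf_sse(X, y):
    a, b = fit_leaf(X, y)
    r = y - (a * X + b)
    return float(r @ r)

def build_lmt(X, y):
    # Depth-1 LMT: pick the split threshold (searched over quantiles
    # of the data) that minimizes the summed squared error of the two
    # leaf linear models, giving a piecewise linear approximator.
    best = None
    for t in np.quantile(X, np.linspace(0.1, 0.9, 17)):
        left, right = X < t, X >= t
        if left.sum() < 2 or right.sum() < 2:
            continue
        sse = leaf_sse(X[left], y[left]) + leaf_sse(X[right], y[right])
        if best is None or sse < best[0]:
            best = (sse, t, fit_leaf(X[left], y[left]),
                    fit_leaf(X[right], y[right]))
    _, t, (al, bl), (ar, br) = best

    def predict(x):
        x = np.asarray(x, dtype=float)
        return np.where(x < t, al * x + bl, ar * x + br)
    return predict, t

# Stand-in black box to be explained: |x| (a kinked, nonlinear map).
black_box = np.abs
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 400)   # inputs sampled from the domain
y = black_box(X)              # corresponding black-box responses
surrogate, split = build_lmt(X, y)
```

Because each leaf is linear, the leaf slope acts as a local feature attribution, and the tree structure itself (here a single split near the kink at 0) is a human-readable summary of the black box; this is the sense in which the abstract calls the LMT "an explanation in itself".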
dc.language.iso: eng (en_US)
dc.publisher: NTNU (en_US)
dc.relation.ispartofseries: Doctoral theses at NTNU;2023:134
dc.relation.haspart: Paper 1: Gjærum, Vilde Benoni; Rørvik, Ella-Lovise Hammervold; Lekkas, Anastasios M.. Approximating a deep reinforcement learning docking agent using linear model trees. In: Proceedings of the 2021 European Control Conference. IEEE conference proceedings 2021. ISBN 978-9-4638-4236-5. s. 1465-1471. Copyright © 2021 IEEE. Available at: http://dx.doi.org/10.23919/ECC54610.2021.9655007
dc.relation.haspart: Paper 2: Løver, Jakob; Gjærum, Vilde Benoni; Lekkas, Anastasios M.. Explainable AI methods on a deep reinforcement learning agent for automatic docking. IFAC-PapersOnLine 2021; Volum 54.(16), s. 146-152. Copyright © 2021 The Authors. This is an open access article under the CC BY-NC-ND license. Available at: http://dx.doi.org/10.1016/j.ifacol.2021.10.086
dc.relation.haspart: Paper 3: Gjærum, Vilde Benoni; Strümke, Inga; Alsos, Ole Andreas; Lekkas, Anastasios M.. Explaining a deep reinforcement learning docking agent using linear model trees with user adapted visualization. Journal of Marine Science and Engineering 2021; Volum 9.(11). Copyright © 2021 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Available at: http://dx.doi.org/10.3390/jmse9111178
dc.relation.haspart: Paper 4: Gjærum, Vilde Benoni; Strümke, Inga; Løver, Jakob; Miller, Timothy; Lekkas, Anastasios M.. Model tree methods for explaining deep reinforcement learning agents in real-time robotic applications. Neurocomputing 2023; Volum 515, s. 133-144. © 2022 The Author(s). This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). Available at: http://dx.doi.org/10.1016/j.neucom.2022.10.014
dc.relation.haspart: Paper 5: Gjærum, Vilde Benoni; Strümke, Inga; Lekkas, Anastasios M.; Miller, Timothy. Real-Time Counterfactual Explanations For Robotic Systems With Multiple Continuous Outputs. Accepted to: The 22nd World Congress of the International Federation of Automatic Control (IFAC WC) (2023). Available at arXiv: https://doi.org/10.48550/arXiv.2212.04212
dc.title: Machine learning in robotics: Explaining autonomous agents in real time (en_US)
dc.type: Doctoral thesis (en_US)
dc.subject.nsi: VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550::Teknisk kybernetikk: 553 (en_US)

