Show simple item record

dc.contributor.author	Remman, Sindre Benjamin
dc.contributor.author	Lekkas, Anastasios M.
dc.date.accessioned	2022-04-04T10:29:54Z
dc.date.available	2022-04-04T10:29:54Z
dc.date.created	2021-09-30T11:11:36Z
dc.date.issued	2021
dc.identifier.uri	https://hdl.handle.net/11250/2989516
dc.description.abstract	This paper deals with robotic lever control using Explainable Deep Reinforcement Learning. First, we train a policy by using the Deep Deterministic Policy Gradient algorithm and the Hindsight Experience Replay technique, where the goal is to control a robotic manipulator to manipulate a lever. This enables us both to use continuous states and actions and to learn with sparse rewards. Being able to learn from sparse rewards is especially desirable for Deep Reinforcement Learning because designing a reward function for complex tasks such as this is challenging. We first train in the PyBullet simulator, which accelerates the training procedure, but is not accurate on this task compared to the real-world environment. After completing the training in PyBullet, we further train in the Gazebo simulator, which runs more slowly than PyBullet, but is more accurate on this task. We then transfer the policy to the real-world environment, where it achieves comparable performance to the simulated environments for most episodes. To explain the decisions of the policy we use the SHAP method to create an explanation model based on the episodes done in the real-world environment. This gives us some results that agree with intuition, and some that do not. We also question whether the independence assumption made when approximating the SHAP values influences the accuracy of these values for a system such as this, where there are some correlations between the states.	en_US
dc.language.iso	eng	en_US
dc.publisher	Institute of Electrical and Electronics Engineers (IEEE)	en_US
dc.title	Robotic Lever Manipulation using Hindsight Experience Replay and Shapley Additive Explanations	en_US
dc.type	Peer reviewed	en_US
dc.type	Journal article	en_US
dc.description.version	acceptedVersion	en_US
dc.rights.holder	© IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.	en_US
dc.source.journal	IEEE Xplore digital library	en_US
dc.identifier.doi	10.23919/ECC54610.2021.9654850
dc.identifier.cristin	1941212
dc.relation.project	Norges forskningsråd: 223254	en_US
dc.relation.project	Norges forskningsråd: 304843	en_US
cristin.ispublished	true
cristin.fulltext	postprint
cristin.qualitycode	0

