Show simple item record

dc.contributor.author: Leonardi, Marco
dc.contributor.author: Fiori, Luca
dc.contributor.author: Stahl, Annette
dc.date.accessioned: 2021-09-10T05:51:23Z
dc.date.available: 2021-09-10T05:51:23Z
dc.date.created: 2020-09-18T12:14:22Z
dc.date.issued: 2020
dc.identifier.issn: 2405-8963
dc.identifier.uri: https://hdl.handle.net/11250/2775039
dc.description.abstract: Most visual odometry (VO) and visual simultaneous localization and mapping (VSLAM) systems rely heavily on robust keypoint detection and matching. In images taken in the underwater environment, phenomena such as shallow-water caustics and dynamic objects such as fish can lead to the detection and matching of unreliable (unsuitable) keypoints within the visual motion estimation pipeline. We propose a plug-and-play keypoint rejection system that discards keypoints unsuitable for tracking in order to obtain a robust visual ego-motion estimate. A convolutional neural network is trained in a supervised manner, with image patches having a detected keypoint at their center as input and the probability that such a keypoint is suitable for tracking and mapping as output. We provide experimental evidence that the system prevents the tracking of unsuitable keypoints in a state-of-the-art VSLAM system. In addition, we evaluate several strategies aimed at increasing the inference speed of the network for real-time operation.
dc.language.iso: eng
dc.publisher: International Federation of Automatic Control (IFAC)
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no
dc.subject: Robotsyn
dc.subject: Robotic Vision
dc.subject: Undervannsrobotikk
dc.subject: Underwater robotics
dc.subject: Robotikk
dc.subject: Robotics
dc.subject: Maskinlæring
dc.subject: Machine learning
dc.title: Deep learning based keypoint rejection system for underwater visual ego-motion estimation
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.source.journal: IFAC-PapersOnLine
dc.identifier.doi: http://dx.doi.org/10.1016/j.ifacol.2020.12.2420
dc.identifier.cristin: 1831152
dc.relation.project: Norges forskningsråd: 223254
dc.relation.project: Norges forskningsråd: 304667
dc.relation.project: Norges forskningsråd: 262741
cristin.ispublished: true
cristin.fulltext: postprint
cristin.qualitycode: 1
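The abstract describes a CNN that scores image patches centered on detected keypoints and rejects those unlikely to be suitable for tracking. A minimal sketch of that rejection step is shown below; `score_fn` is a hypothetical stand-in for the trained network, and the patch size, threshold, and function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def extract_patch(image, keypoint, size=32):
    """Crop a size x size patch centered on keypoint (x, y).

    Returns None if the patch would extend past the image border.
    """
    x, y = keypoint
    half = size // 2
    if x - half < 0 or y - half < 0 or x + half > image.shape[1] or y + half > image.shape[0]:
        return None
    return image[y - half:y + half, x - half:x + half]

def filter_keypoints(image, keypoints, score_fn, threshold=0.5):
    """Keep only keypoints whose predicted suitability probability
    (as returned by score_fn on the surrounding patch) meets the threshold."""
    kept = []
    for kp in keypoints:
        patch = extract_patch(image, kp)
        if patch is None:
            continue  # too close to the border to form a full patch
        if score_fn(patch) >= threshold:
            kept.append(kp)
    return kept
```

In a plug-and-play setting, a filter like this would sit between the keypoint detector and the tracking/matching stage of the VSLAM front end, so the rest of the pipeline is unchanged.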


Associated file(s)


This item appears in the following collection(s)


Except where otherwise noted, this item is licensed under Attribution-NonCommercial-NoDerivatives 4.0 International.