Show simple item record

dc.contributor.author: Nguyen, Dinh Huan
dc.contributor.author: Andersen, Rasmus
dc.contributor.author: Boukas, Evangelos
dc.contributor.author: Alexis, Konstantinos
dc.date.accessioned: 2024-01-16T09:06:28Z
dc.date.available: 2024-01-16T09:06:28Z
dc.date.created: 2023-11-18T21:33:48Z
dc.date.issued: 2023
dc.identifier.issn: 0278-3649
dc.identifier.uri: https://hdl.handle.net/11250/3111674
dc.description.abstract: Autonomous navigation and information gathering in challenging environments are demanding since the robot’s sensors may be susceptible to non-negligible noise, its localization and mapping may be subject to significant uncertainty and drift, and performing collision-checking or evaluating utility functions using a map often requires high computational costs. We propose a learning-based method to efficiently tackle this problem without relying on a map of the environment or the robot’s position. Our method utilizes a Collision Prediction Network (CPN) for predicting the collision scores of a set of action sequences, and an Information gain Prediction Network (IPN) for estimating their associated information gain. Both networks assume access to a) the depth image (CPN) or the depth image and the detection mask from any visual method (IPN), b) the robot’s partial state (including its linear velocities, z-axis angular velocity, and roll/pitch angles) and c) a library of action sequences. Specifically, the CPN accounts for the estimation uncertainty of the robot’s partial state and the neural network’s epistemic uncertainty by using the Unscented Transform and an ensemble of neural networks. The outputs of the networks are combined with a goal vector to identify the next-best action sequence. Simulation studies demonstrate the method’s robustness against noisy robot velocity estimates and depth images, alongside its advantages compared to state-of-the-art methods and baselines in (visually-attentive) navigation tasks. Lastly, multiple real-world experiments are presented, including safe flights at 2.5 m/s in a cluttered corridor, and missions inside a dense forest alongside visually-attentive navigation in industrial and university buildings. [en_US]
dc.language.iso: eng [en_US]
dc.rights: Navngivelse 4.0 Internasjonal
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Uncertainty-Aware Visually-Attentive Navigation Using Deep Neural Networks [en_US]
dc.title.alternative: Uncertainty-Aware Visually-Attentive Navigation Using Deep Neural Networks [en_US]
dc.type: Peer reviewed [en_US]
dc.type: Journal article [en_US]
dc.description.version: publishedVersion [en_US]
dc.source.journal: The International Journal of Robotics Research [en_US]
dc.identifier.cristin: 2198435
dc.relation.project: Other: Air Force Office of Scientific Research FA8655-21-1-7033 [en_US]
dc.relation.project: Norges forskningsråd: 321435 [en_US]
cristin.ispublished: false
cristin.fulltext: postprint
cristin.qualitycode: 1
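
The abstract above describes how the outputs of the Collision Prediction Network (CPN) and the Information gain Prediction Network (IPN) are combined with a goal vector to identify the next-best action sequence from a library of candidates. The following is a minimal, illustrative Python sketch of what such a selection step could look like; the scoring weights, collision threshold, and goal-alignment term are assumptions made here for illustration and are not taken from the paper.

```python
# Illustrative sketch only: the paper's CPN/IPN are deep networks whose outputs
# are assumed here to be precomputed arrays; all weights and thresholds below
# are hypothetical.
import numpy as np

def select_next_best_action(action_library, collision_scores, info_gains, goal_vector,
                            collision_threshold=0.1, w_info=1.0, w_goal=0.5):
    """Pick the action sequence maximizing information gain plus goal alignment,
    rejecting sequences whose predicted collision score exceeds a safety threshold."""
    best_idx, best_utility = None, -np.inf
    for i, actions in enumerate(action_library):
        if collision_scores[i] > collision_threshold:   # discard unsafe sequences
            continue
        # Goal alignment: cosine similarity between the sequence's mean velocity
        # direction and the goal direction (a simplifying assumption).
        mean_dir = np.mean(actions, axis=0)
        alignment = np.dot(mean_dir, goal_vector) / (
            np.linalg.norm(mean_dir) * np.linalg.norm(goal_vector) + 1e-9)
        utility = w_info * info_gains[i] + w_goal * alignment
        if utility > best_utility:
            best_idx, best_utility = i, utility
    return best_idx

# Toy usage: 3 candidate sequences of 5 velocity commands (vx, vy) each.
library = [np.random.randn(5, 2) for _ in range(3)]
coll = np.array([0.02, 0.4, 0.05])   # stand-in for CPN collision scores
gain = np.array([0.3, 0.9, 0.6])     # stand-in for IPN information gains
print(select_next_best_action(library, coll, gain, goal_vector=np.array([1.0, 0.0])))
```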


