
dc.contributor.author: Djenouri, Youcef
dc.contributor.author: Hatleskog, Johan
dc.contributor.author: Hjelmervik, Jon M.
dc.contributor.author: Bjorne, Elias
dc.contributor.author: Utstumo, Trygve
dc.contributor.author: Mobarhan, Milad
dc.date.accessioned: 2022-02-04T08:17:42Z
dc.date.available: 2022-02-04T08:17:42Z
dc.date.created: 2021-12-02T10:26:53Z
dc.date.issued: 2021
dc.identifier.citation: Applied intelligence (Boston). 2021.
dc.identifier.issn: 0924-669X
dc.identifier.uri: https://hdl.handle.net/11250/2977039
dc.description.abstract: In the heavy-asset industry, such as oil & gas, offshore personnel need to locate various pieces of equipment on the installation on a daily basis for inspection and maintenance purposes. However, locating equipment in such GPS-denied environments is very time-consuming due to the complexity of the environment and the large amount of equipment. To address this challenge, we investigate an alternative approach to the navigation problem based on visual imagery data instead of the current ad-hoc methods, in which engineering drawings or large CAD models are used to find equipment. In particular, this paper investigates the combination of deep learning and decomposition for the image-retrieval problem, which is central to visual navigation. A convolutional neural network is first used to extract relevant features from the image database. The database is then decomposed into clusters of visually similar images; several algorithms have been explored to make the clusters as independent as possible. The Bag-of-Words (BoW) approach is then applied to each cluster to build a vocabulary forest. During the search process, the vocabulary forest is exploited to find the images most relevant to the query image. To validate the usefulness of the proposed framework, extensive experiments have been carried out on both standard datasets and images from industrial environments. We show that the suggested approach outperforms BoW-based image-retrieval solutions in terms of both computing time and accuracy. We also show the applicability of this approach in real industrial scenarios by applying the model to imagery data from offshore oil platforms.
dc.language.iso: eng
dc.publisher: Springer
dc.rights: Navngivelse 4.0 Internasjonal (Attribution 4.0 International)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Deep learning based decomposition for visual navigation in industrial platforms
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.source.pagenumber: 17
dc.source.journal: Applied intelligence (Boston)
dc.identifier.doi: 10.1007/s10489-021-02908-z
dc.identifier.cristin: 1963189
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 2
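The pipeline described in the abstract — CNN features, decomposition of the database into clusters, a per-cluster vocabulary, and query routing to the most relevant cluster — can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the random vectors stand in for CNN descriptors, a plain k-means plays the role of the decomposition step, and the cluster centroids stand in for the per-cluster BoW vocabularies.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # Plain Lloyd's k-means; a real system would use an optimized library.
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    # Recompute assignments so labels match the final centers.
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return centers, labels

# 100 database "images" with 32-dim descriptors (stand-ins for CNN features).
feats = rng.normal(size=(100, 32))
centers, labels = kmeans(feats, k=4)

def search(query, top=3):
    # Route the query to its nearest cluster, then rank only that
    # cluster's images -- the decomposition is what limits the search
    # space and yields the computing-time gain reported in the abstract.
    c = np.argmin(((centers - query) ** 2).sum(-1))
    idx = np.where(labels == c)[0]
    order = np.argsort(((feats[idx] - query) ** 2).sum(-1))
    return idx[order[:top]]

hits = search(feats[7])
```

Querying with a database image returns that image first (its distance is zero), and all returned candidates come from a single cluster, which is the point of the decomposition.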



Navngivelse 4.0 Internasjonal (Attribution 4.0 International)
Except where otherwise noted, this item's license is described as Navngivelse 4.0 Internasjonal (Attribution 4.0 International).