Show simple item record

dc.contributor.advisor: Ludvigsen, Martin
dc.contributor.advisor: Johnsen, Geir
dc.contributor.advisor: Aas, Lars Martin
dc.contributor.author: Owren, Lars Berge
dc.date.accessioned: 2019-09-11T08:51:44Z
dc.date.created: 2018-01-14
dc.date.issued: 2018
dc.identifier: ntnudaim:18170
dc.identifier.uri: http://hdl.handle.net/11250/2615061
dc.description.abstract: The underwater hyperspectral imager allows high-spectral-resolution images to be acquired under water. The hyperspectral images may be used for classification of objects under water, and applications for the imager are being discovered and developed. One drawback of the hyperspectral imager is the loss of geometric information due to the slit-scan approach to capturing images. Currently, the spectral fingerprints of measured objects are used for classification, while the geometric characteristics are usually not considered. Traditional high-spatial-resolution RGB cameras have poor spectral resolution compared to the underwater hyperspectral imager, but are able to preserve the geometric characteristics of the measured objects. In this thesis, we have investigated approaches for sensor fusion between the hyperspectral images and the still images from an RGB camera, using image processing tools. The ORB feature detector has been tested for matching features between the images, and a new approach for matching features is presented and tested. The feature matching approaches have been tested on three different datasets. The approaches had some success on the best-suited datasets. One dataset contained some flawed data in the form of unfocused images; on this dataset, the feature matching was not successful. Prior to the image analysis, the images from all datasets were georectified using navigation data recorded during the data acquisition. The georectification showed reasonable results for both the hyperspectral transects and the still images. By comparing the georectified images, the displacement between corresponding objects in the still images and hyperspectral images was determined. These errors were small enough to allow manual recognition of the scenes from each image, but still significant with respect to automatic matching of objects.
An approach for estimating navigation data entirely based on the overlapping still images was tested on two of the datasets. The georectification based on estimated navigation data showed poor results globally, drifting far from the real positions. However, the local error between single objects in the still image and hyperspectral image was in some cases smaller than in the original georectification.
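The matching step the abstract describes can be illustrated in miniature. ORB produces binary descriptors that are compared by Hamming distance; a minimal sketch of brute-force Hamming matching with Lowe's ratio test follows. The 8-bit descriptors and the helper names (`hamming`, `match_descriptors`) are hypothetical, for illustration only; the thesis does not publish its code, and a real pipeline would use a full ORB detector and matcher (e.g. OpenCV's) on 256-bit descriptors.

```python
# Minimal sketch: brute-force matching of binary descriptors
# (the kind ORB produces) by Hamming distance, with a ratio
# test to reject ambiguous matches. Descriptors are modeled
# as plain Python integers for illustration.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, ratio=0.75):
    """Return (query_idx, train_idx) pairs whose best match is
    clearly better than the second best (Lowe's ratio test)."""
    matches = []
    for qi, q in enumerate(query):
        # Distance to every candidate descriptor, best first.
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

# Hypothetical 8-bit descriptors for two images.
query = [0b10110010, 0b01001101]
train = [0b10110011, 0b11111111, 0b01001100]
print(match_descriptors(query, train))  # → [(0, 0), (1, 2)]
```

The ratio test is what makes brute-force matching usable on repetitive underwater scenes: a query descriptor is kept only when its nearest neighbour is markedly closer than the runner-up.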
dc.language: eng
dc.publisher: NTNU
dc.subject: Ingeniørvitenskap og IKT, IKT og marin teknikk
dc.title: Improved Classification of Underwater Hyperspectral Images using Sensor Fusion
dc.type: Master thesis
dc.source.pagenumber: 81
dc.contributor.department: Norges teknisk-naturvitenskapelige universitet, Fakultet for ingeniørvitenskap, Institutt for marin teknikk
dc.date.embargoenddate: 10000-01-01


Associated file(s)


This item appears in the following collection(s)
