Improved Classification of Underwater Hyperspectral Images using Sensor Fusion
Institutt for marin teknikk
The underwater hyperspectral imager allows high spectral resolution images to be acquired under water. These hyperspectral images may be used for classifying objects under water, and applications for the imager are still being discovered and developed. One drawback of the hyperspectral imager is the loss of geometric information caused by its slit-scan approach to capturing images. Currently, the spectral fingerprints of measured objects are used for classification, while their geometric characteristics are usually not considered. Traditional high spatial resolution RGB cameras have poor spectral resolution compared to the underwater hyperspectral imager, but they preserve the geometric characteristics of the measured objects.

In this thesis, we have investigated approaches for sensor fusion between the hyperspectral images and still images from an RGB camera, using image processing tools. The ORB feature detector has been tested for matching features between the images, and a new approach for feature matching is presented and tested. The feature matching approaches have been tested on three different datasets, with some success on the best suited ones. One dataset contained flawed data in the form of unfocused images; on this dataset, the feature matching was not successful.

Prior to the image analysis, the images from all datasets were georectified using navigation data recorded during the data acquisition. The georectification showed reasonable results for both the hyperspectral transects and the still images. By comparing the georectified images, the displacement between corresponding objects in the still images and hyperspectral images was determined. These displacements were small enough that the scenes could be recognized manually in each image, but still significant with respect to automatic matching of objects.
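To make the descriptor-matching step concrete, the following is a minimal sketch with synthetic data, not the thesis code: brute-force matching of ORB-style binary descriptors by Hamming distance with a cross-check, the matching strategy commonly paired with ORB features. The descriptors and the perturbation are invented for illustration.

```python
import numpy as np

# Minimal sketch (synthetic data, not the thesis pipeline): brute-force
# matching of ORB-style binary descriptors by Hamming distance with a
# cross-check to discard ambiguous correspondences.
rng = np.random.default_rng(0)

def hamming(a, b):
    # Pairwise Hamming distances between two sets of packed-byte descriptors.
    return np.unpackbits(a[:, None, :] ^ b[None, :, :], axis=2).sum(axis=2)

def cross_check_match(des1, des2):
    d = hamming(des1, des2)
    fwd = d.argmin(axis=1)  # best candidate in des2 for each row of des1
    bwd = d.argmin(axis=0)  # best candidate in des1 for each row of des2
    # Keep only mutually consistent ("cross-checked") pairs.
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# Five synthetic 256-bit descriptors; the second set is a shuffled copy
# with one bit flipped per descriptor, mimicking small appearance changes.
des1 = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)
des2 = des1.copy()
des2[:, 0] ^= 1
perm = rng.permutation(5)
matches = cross_check_match(des1, des2[perm])  # recovers the permutation
```

Because the correct pair differs by a single bit while unrelated random descriptors differ by roughly half their bits, every cross-checked match here recovers the shuffled correspondence.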
An approach for estimating navigation data entirely from the overlapping still images was tested on two of the datasets. The georectification based on the estimated navigation data showed poor results globally, drifting far from the true positions. The local error between single objects in the still image and the hyperspectral image, however, was in some cases smaller than in the original georectification.
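As a hypothetical illustration of image-based navigation estimation (not the method implemented in the thesis), the sketch below recovers the translation between two overlapping images by phase correlation. Chaining such frame-to-frame estimates yields relative navigation data, and it is the accumulation of small per-frame errors that produces the kind of global drift described above.

```python
import numpy as np

# Hypothetical sketch (not the thesis implementation): recovering the
# translation between two overlapping images by phase correlation.
def phase_correlation_shift(img_a, img_b):
    # Returns (dy, dx) such that img_a ~= np.roll(img_b, (dy, dx), axis=(0, 1)).
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12   # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real  # delta-like peak at the translation
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # Wrap peaks in the upper half of each axis to negative shifts.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
base = rng.random((64, 64))
shifted = np.roll(base, (5, -3), axis=(0, 1))
dy, dx = phase_correlation_shift(shifted, base)  # (5, -3)
```

Phase correlation recovers a circular shift exactly; on real image pairs with only partial overlap and perspective change, the estimate is noisier, which is one way small local errors enter the chained trajectory.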