
Explainable AI for RGB to HyperSpectral CNN Models

Issa, Hamzeh
Master thesis
View/Open
no.ntnu:inspera:147335080:94577577.pdf (31.52Mb)
URI
https://hdl.handle.net/11250/3092865
Date
2023
Collections
  • Institutt for datateknologi og informatikk [7453]
Abstract

HyperSpectral Imaging (HSI) is a vital tool in many industries and fields. It is, however, very costly and time consuming, and it requires dedicated hardware. Much research has been dedicated to finding alternatives to traditional HSI systems. One of the most promising is RGB to hyperspectral reconstruction. These models are usually CNNs that take a single RGB image and estimate the hyperspectral image of the same scene (in the visible range). Given the availability and ease of acquiring RGB images, such models can dramatically cut the cost and time needed to acquire a hyperspectral image.
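To make the setup concrete, here is a minimal sketch of such a reconstruction CNN in PyTorch. It is an illustration only, assuming a plain fully convolutional design with 31 output bands; the models studied in the thesis are considerably more elaborate, and the names used here are hypothetical.

```python
# Minimal sketch of an RGB-to-hyperspectral reconstruction CNN.
# Illustrative only: assumes a plain fully convolutional design
# with 31 output bands, not any of the seven models in the thesis.
import torch
import torch.nn as nn

class RGB2HSI(nn.Module):
    def __init__(self, out_bands: int = 31, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, kernel_size=3, padding=1),    # lift the 3 RGB bands
            nn.ReLU(inplace=True),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, out_bands, kernel_size=3, padding=1),  # 31 spectral bands
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (N, 3, H, W) -> hsi: (N, 31, H, W), same spatial resolution
        return self.net(rgb)

model = RGB2HSI()
hsi = model(torch.rand(1, 3, 64, 64))
print(hsi.shape)  # torch.Size([1, 31, 64, 64])
```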

However, to fully adopt such models we need to establish trust (or distrust) in them. To do that, we need to understand and explain, at least at a fundamental level, how these models work. This is especially important because these models deal with a highly ill-posed problem: mapping only 3 RGB bands onto a much larger number of bands (typically 31). Users have no evidence of how these models actually do that, how they estimate the illuminant of the scene to avoid metameric effects, or how they perform the 'one-to-many' mapping involved. In this thesis, we work on filling this major gap.
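For intuition on why the mapping is ill-posed, consider the standard discretized image-formation model (a common textbook formulation; the thesis's own notation may differ):

```latex
% RGB formation from a 31-band spectrum: the camera integrates the
% spectral signal against its three sensitivity curves, which
% discretizes to a 3x31 linear map.
\[
  \mathbf{y} = M\,\mathbf{x}, \qquad
  \mathbf{y} \in \mathbb{R}^{3}\ (\text{RGB}), \quad
  \mathbf{x} \in \mathbb{R}^{31}\ (\text{spectrum}), \quad
  M \in \mathbb{R}^{3 \times 31}.
\]
% Since rank(M) <= 3, infinitely many spectra x share the same RGB y
% (metamers); a reconstruction network must select one of them, which
% is exactly the 'one-to-many' mapping mentioned above.
```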

We take 7 of the most prominent RGB to hyperspectral reconstruction models and apply a range of explainable AI (XAI) methods to understand how they work. We classify these models by the different ways they perform the reconstruction. We establish points of failure where some or all of the models cannot perform as expected. We establish their spatial feature area in the input image. We investigate what kinds of parameters and features they use and where in the network they use them. We present a theory on how they perform illuminant estimation, together with supporting evidence for that theory. Finally, we bring all tests together and try to break these models down into simpler sub-models that could be replicated by simpler explainable equivalents. We also introduce novel modifications to existing XAI methods that allow them to be used in any future hyperspectral model explainability project.
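As one example of what adapting an XAI method to a hyperspectral output can look like, the sketch below computes vanilla gradient saliency for a single reconstructed band rather than for a class score. This is a generic illustration of the idea (reusing the hypothetical RGB2HSI sketch above), not the specific modifications developed in the thesis.

```python
# Vanilla gradient saliency adapted to a hyperspectral output:
# explain the mean activation of one reconstructed band with
# respect to the RGB input. Illustrative only; not the thesis's
# actual modified XAI methods.
import torch

def band_saliency(model, rgb: torch.Tensor, band: int = 15) -> torch.Tensor:
    rgb = rgb.clone().requires_grad_(True)
    hsi = model(rgb)                    # (N, 31, H, W) reconstruction
    hsi[:, band].mean().backward()      # scalar target: mean of one band
    return rgb.grad.abs().sum(dim=1)    # (N, H, W) per-pixel saliency

saliency = band_saliency(RGB2HSI(), torch.rand(1, 3, 64, 64))
print(saliency.shape)  # torch.Size([1, 64, 64])
```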

The outcomes of this work support the view that these models work in an intelligible manner, meaning that they could be understood and replicated by other, explainable models. However, these models cannot be trusted unconditionally, since the work shows that they fail consistently under certain conditions. This work does not fully explain these models, as some aspects remain unclear, but it does explain many important parts and paves the way for a clearer understanding of these networks.
 
Publisher
NTNU
