Hyperspectral Imaging (HSI) is a vital tool in many industries and fields. It is,
however, costly, time-consuming, and dependent on dedicated hardware. A great deal of
research has been devoted to finding alternatives to traditional HSI systems. One of the
most promising is RGB-to-hyperspectral reconstruction. These models are
usually CNNs that take a single RGB image and estimate the hyperspectral
image of the same scene (in the visible range). Such models can dramatically cut
the cost and time needed to acquire a hyperspectral image, given how readily
available and easy to acquire RGB images are.
However, to fully adopt such models we need to establish trust in them (or
justified distrust). To do that, we need to understand and explain, at least on a
fundamental level, how these models work. This is especially important because these
models tackle the highly ill-posed problem of mapping only 3 RGB bands to a much larger
number of spectral bands (typically 31). Users have no evidence of how these models
actually do this, how they manage to estimate the illuminant of the scene to avoid
metameric effects, or how they perform the ‘one-to-many’ mapping involved.
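To make the ill-posedness concrete, consider the standard image-formation model (a textbook formulation, not necessarily the exact notation used later in this thesis), in which each RGB channel value is a projection of the scene's spectral signal through a camera sensitivity function:
\[
I_c = \int_{\lambda} E(\lambda)\, S(\lambda)\, Q_c(\lambda)\, d\lambda, \qquad c \in \{R, G, B\},
\]
where \(E\) is the illuminant spectrum, \(S\) the surface reflectance, and \(Q_c\) the sensitivity of channel \(c\). In discretised form this is a linear map from (typically) 31 spectral samples down to only 3 values, so infinitely many spectra (metamers) can produce the same RGB triplet, and any reconstruction must rely on learned priors.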
In this thesis, we work on filling this major gap. We take 7 of the most prominent
RGB-to-hyperspectral reconstruction models and apply a range of explainable AI (XAI)
methods to understand how they work. We classify these models based on the different
ways they perform the reconstruction. We identify points of failure where some or all
models cannot perform as expected. We establish the spatial feature area they rely on
in the input image. We investigate what kinds of parameters and features they use and
where in the network they use them. We present a theory of how they perform illuminant
estimation and provide supporting evidence for it. Finally, we bring all the tests
together and attempt to break these models down into simpler sub-models that could be
replicated by simpler, explainable equivalents. We also introduce novel modifications
to existing XAI methods that allow them to be used in any future hyperspectral model
explainability project.
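As an illustration of what applying such methods to these networks involves, the minimal sketch below computes a vanilla gradient saliency map for a single output band of a dummy RGB-to-hyperspectral network; the model, shapes, and method choice are illustrative placeholders, not the specific networks or adapted XAI methods studied in the thesis.

    import torch

    # Dummy stand-in for a reconstruction model: any network mapping an
    # RGB image (B, 3, H, W) to a 31-band estimate (B, 31, H, W).
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 64, 3, padding=1),
        torch.nn.ReLU(),
        torch.nn.Conv2d(64, 31, 3, padding=1),
    )

    def band_saliency(model, rgb, band, y, x):
        """Gradient of one spectral band at one pixel w.r.t. the RGB input
        (plain vanilla saliency; the thesis adapts several XAI methods)."""
        rgb = rgb.clone().requires_grad_(True)
        output = model(rgb)               # (B, 31, H, W) hyperspectral estimate
        output[0, band, y, x].backward()  # attribute a single band at a single pixel
        return rgb.grad[0].abs().sum(0)   # (H, W) saliency summed over RGB channels

    rgb = torch.rand(1, 3, 64, 64)        # dummy RGB input
    sal = band_saliency(model, rgb, band=15, y=32, x=32)
    print(sal.shape)                      # torch.Size([64, 64])

Attributing each output band separately in this way is one simple route to per-band explanations; the modifications introduced in this work follow the same spirit but are applied to the actual reconstruction networks.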
The outcomes of this work support the view that these models operate in an intelligible
manner, meaning that they could be understood and replicated by other, explainable
models. However, these models cannot be trusted unconditionally, since the work
shows that they fail consistently under certain conditions. This work does not fully
explain these models, as some aspects remain unclear, but it does explain many
important parts and paves the way for a clearer understanding of these networks.