
dc.contributor.advisor: Le Moan, Steven
dc.contributor.advisor: Amirshahi, Seyed Ali
dc.contributor.author: Nguyen, Ha Thu
dc.date.accessioned: 2023-09-28T17:24:36Z
dc.date.available: 2023-09-28T17:24:36Z
dc.date.issued: 2023
dc.identifier: no.ntnu:inspera:147335080:94551832
dc.identifier.uri: https://hdl.handle.net/11250/3092862
dc.description.abstract: Image quality assessment has been an active research field for decades because of the high demand for image and video content in daily life. As visual information passes through various steps, from acquisition and storage to transmission, it is often degraded by multiple types of distortions. It is necessary to evaluate the quality of any imaging system to maintain the user's experience. Thus, objective image quality assessment methods were proposed to evaluate image quality as closely as possible to the perceptual quality rated by human users. Among the three types of image quality assessment, No-Reference image quality assessment (NR-IQA) has the most potential for use in various applications and is also the most challenging. Traditional NR-IQA metrics use domain knowledge of natural images to extract hand-crafted features that indicate the degree of degradation of a distorted image. Recently, many deep learning models have been applied to NR-IQA and outperform traditional methods in predicting image quality. However, they are data-driven models that contain numerous parameters and lack explainability. It is therefore challenging to understand how such deep NR-IQA models estimate image quality and why they fail on some images. Moreover, although many different methods for explaining deep learning models have been introduced, none of them targets image quality assessment. In this work, we address this research gap in explaining deep NR-IQA models. First, we defined a set of definitions and expectations for explainable artificial intelligence (XAI) in the field of image quality assessment. Then, we proposed a framework to provide explanations at different levels, from the model's global behavior to its local predictions. The global explanations were formed by analyzing the images whose quality the model cannot predict accurately; to find such images, we proposed to use objective detection methods for IQA models. We also used different existing XAI methods to obtain explanations for the model in different information domains: spatial, frequency, and color. The different explanation results are discussed in our project. We found that existing XAI methods can explain NR-IQA models to some extent. However, there is currently no way to evaluate the effectiveness of those explanations for image quality assessment problems. Future work is needed to provide an objective evaluation of XAI for image quality assessment, or to find alternative methods that better explain NR-IQA models.
dc.language: eng
dc.publisher: NTNU
dc.title: Explainable Artificial Intelligence for Image Quality Assessment
dc.type: Master thesis
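
The abstract above mentions applying existing XAI methods to explain a deep NR-IQA model in the spatial, frequency, and color domains. Below is a minimal sketch of one generic perturbation-based technique of that kind, occlusion sensitivity, which can serve as a local, spatial-domain explanation; it is not the framework proposed in the thesis. The function predict_quality is a toy stand-in (a crude gradient-magnitude sharpness proxy) for an actual NR-IQA model, and the patch size, stride, and gray-fill value are illustrative assumptions.

import numpy as np

def predict_quality(image):
    # Toy stand-in for a deep NR-IQA model: mean gradient magnitude as a
    # crude sharpness proxy. A real study would call the model's forward pass.
    gy, gx = np.gradient(image.astype(np.float64))
    return float(np.mean(np.abs(gx)) + np.mean(np.abs(gy)))

def occlusion_sensitivity(image, patch=16, stride=16):
    # Slide a flat patch over the image and record how much the predicted
    # quality score changes; large changes mark regions the model relies on
    # when estimating quality (a local explanation heat map).
    base = predict_quality(image)
    h, w = image.shape[:2]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()
            heat[i, j] = abs(base - predict_quality(occluded))
    return heat

if __name__ == "__main__":
    img = np.random.rand(64, 64)  # stand-in for a distorted grayscale test image
    print(occlusion_sensitivity(img))

Upsampling the resulting heat map to the image size and overlaying it on the distorted image gives the kind of spatial saliency visualization commonly used to inspect which regions a quality model attends to.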

