Show simple item record

dc.contributor.author: Vitale, Raffaele
dc.contributor.author: Ruckebusch, Cyril
dc.contributor.author: Burud, Ingunn
dc.contributor.author: Martens, Harald Aagaard
dc.date.accessioned: 2023-01-24T07:19:24Z
dc.date.available: 2023-01-24T07:19:24Z
dc.date.created: 2022-05-04T12:59:18Z
dc.date.issued: 2022
dc.identifier.citation: Frontiers in Chemistry. 2022, 10, 1-17.
dc.identifier.issn: 2296-2646
dc.identifier.uri: https://hdl.handle.net/11250/3045607
dc.description.abstract: Hyperspectral imaging has recently gained increasing attention in both academia and industry due to its capability of providing both spatial and physico-chemical information about the investigated objects. While this analytical approach is enjoying substantial success and diffusion in very disparate scenarios, far less exploited is the possibility of collecting sequences of hyperspectral images over time for monitoring dynamic scenes. This trend is mainly explained by the fact that such so-called hyperspectral videos usually result in big data sets, requiring terabytes of computer memory to be both stored and processed. Clearly, standard chemometric techniques need to be adapted or extended to deal with such massive amounts of information. In addition, hyperspectral video data are often affected by many different sources of variation in sample chemistry (for example, light absorption effects) and sample physics (light scattering effects), as well as by systematic errors (associated, e.g., with fluctuations in the behaviour of the light source and/or of the camera). Identifying, disentangling and interpreting all these distinct sources of information therefore represents an undoubtedly challenging task. In view of all these aspects, the present work describes a multivariate hybrid modelling framework for the analysis of hyperspectral videos, which involves spatial, spectral and temporal parametrisations of both known and unknown chemical and physical phenomena underlying complex real-world systems.
This framework encompasses three computational steps: 1) motions ongoing within the inspected scene are estimated by optical flow analysis and compensated through IDLE modelling; 2) chemical variations are quantified and separated from physical variations by means of Extended Multiplicative Signal Correction (EMSC); 3) the resulting light scattering and light absorption data are subjected to On-The-Fly Processing and summarised spectrally, spatially and over time. The developed methodology was tested here on a near-infrared hyperspectral video of a piece of wood undergoing drying. It led to a significant reduction of the size of the original recorded measurements and, at the same time, provided valuable information about systematic variations generated by the phenomena behind the monitored process.
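Step 2 of the abstract relies on Extended Multiplicative Signal Correction (EMSC). As a rough illustration of the general idea only (not the authors' implementation, and omitting the spectral-interferent terms a full EMSC model may carry), each spectrum can be regressed on a reference spectrum plus low-order polynomial baseline terms, after which the estimated physical contributions (multiplicative scatter and additive baseline) are divided and subtracted out:

```python
import numpy as np

def emsc(spectra, reference, poly_order=2):
    """Basic EMSC sketch.

    Each row of `spectra` (shape: n_samples x n_wavelengths) is modelled as
        x ≈ b * reference + sum_k c_k * wl**k
    where b captures multiplicative light-scattering effects and the
    polynomial captures additive baseline effects. The corrected spectrum
    is (x - baseline) / b, leaving mainly chemical (absorption) variation.
    """
    n_wl = spectra.shape[1]
    wl = np.linspace(-1.0, 1.0, n_wl)  # scaled wavelength axis
    # Design matrix: polynomial baseline columns + the reference spectrum
    model = np.column_stack(
        [wl ** k for k in range(poly_order + 1)] + [reference]
    )
    # Least-squares fit of every spectrum onto the EMSC model
    coeffs, *_ = np.linalg.lstsq(model, spectra.T, rcond=None)
    baseline = (model[:, :-1] @ coeffs[:-1]).T  # additive physical part
    b = coeffs[-1]                              # multiplicative scatter part
    return (spectra - baseline) / b[:, None]
```

In a hyperspectral-video setting, the same correction would be applied pixel-wise to every frame, and the fitted coefficients themselves (`b` and the baseline terms) quantify the physical variations that the abstract describes as being separated from the chemical ones.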
dc.language.iso: eng
dc.publisher: Frontiers Media
dc.rights: Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Hyperspectral Video Analysis by Motion and Intensity Preprocessing and Subspace Autoencoding
dc.title.alternative: Hyperspectral Video Analysis by Motion and Intensity Preprocessing and Subspace Autoencoding
dc.type: Peer reviewed
dc.type: Journal article
dc.description.version: publishedVersion
dc.source.pagenumber: 1-17
dc.source.volume: 10
dc.source.journal: Frontiers in Chemistry
dc.identifier.doi: 10.3389/fchem.2022.818974
dc.identifier.cristin: 2021412
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 1


Associated file(s)


This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item is licensed under Attribution 4.0 International