Show simple item record

dc.contributor.advisor: Pedersen, Marius
dc.contributor.advisor: Yang, Bian
dc.contributor.author: Kitanovski, Vlado
dc.date.accessioned: 2019-08-29T09:03:10Z
dc.date.available: 2019-08-29T09:03:10Z
dc.date.issued: 2019
dc.identifier.isbn: 978-82-326-3627-3
dc.identifier.issn: 1503-8181
dc.identifier.uri: http://hdl.handle.net/11250/2611546
dc.description.abstract: Techniques for embedding imperceptible data in printed images have been studied for the past few decades. The hidden data in printed images can be used to bridge the gap between printed media and the digital world in applications such as marketing, ticketing, packaging, hardcopy magazines, or supply chain management. It can also serve as a security feature in certificates, transactional documents, IDs, passports, tickets, vouchers, or other documents of value. Printing normally introduces distortion to data hidden in continuous-tone images, so data hiding techniques that do not account for the impact of the printing channel may not be optimal in terms of capacity, perceptibility, or robustness of the embedded data. The halftoning process in a typical printing workflow can be seen as a heavy quantization that can severely affect data embedded in continuous-tone images. This research project investigates methods for data hiding in color printed halftone images using commonly available digital printing devices, in order to open up space for novel applications or to enable additional functionality in existing applications. The main challenges in developing a data hiding framework are achieving imperceptibility of the embedding distortion and reliable extraction of the embedded data.

In this thesis, we develop methods for data hiding in color halftone images, based on a halftoning framework that uses models of color perception and ensures high quality of the halftone texture. First, we introduce a method for high-capacity embedding of data as oriented texture features in the chrominance channels of halftone images. We develop a detection metric for computational extraction of embedded data from printed-and-scanned images, and we analyze the embedded oriented features in order to obtain extractors that are robust to the print-and-scan process. A statistical model of the chrominance print-and-scan channel is introduced and used for maximum-likelihood decisions during data detection. The performance of the developed data hiding framework is demonstrated in a practical scenario, using an extensive set of test images and common capture devices such as compact and smartphone cameras.

Next, we develop two methods for hiding a visual UV watermark in color images printed with common printers and media. The watermark is revealed under UV illumination due to the fluorescence of whitening agents in paper substrates. In the first method, a UV watermark is embedded in CMY printed images by dynamically modulating the fractional white unprinted area, which is highly correlated with the fluorescent response. The model used for CMY printer characterization is fully measurement-based and not practical for printers with more than three colorants. We address this challenge by proposing a lightweight model of color halftone reproduction that requires a reasonably low number of measurements. We also address the challenge of extending the range of colors suitable for embedding UV watermarks: the second method uses all of the printable primaries to modulate the fluorescent response and achieves improved perceptibility properties of the embedded watermark.

Finally, we investigate the impact of visual masking on the perceptibility of chrominance embedding distortion in natural images. A visual experiment is performed to collect thresholds of just-perceptible distortion in each of the two chrominance channels. We analyze the correlations between the collected thresholds and several low-level image features, and investigate three different regression approaches for predicting thresholds from the low-level image features observed to affect them. A computational model for predicting the perceptibility of grayscale image differences is extended to color, specifically to predict the perceptibility of chrominance image differences. The prediction performance of the computational model and the three regression models is evaluated using the collected thresholds as ground truth.
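The maximum-likelihood detection step mentioned in the abstract can be illustrated with a minimal sketch. This is not the thesis implementation: it assumes a simple Gaussian model of the chrominance print-and-scan channel, and the per-symbol means and variances (which in practice would be estimated from printed-and-scanned calibration images) are hypothetical values chosen for illustration.

```python
import numpy as np

def ml_decide(response, means, variances):
    """Illustrative maximum-likelihood decision: pick the embedded symbol
    (e.g. one of several texture orientations) whose Gaussian likelihood
    of the observed detector response is highest."""
    log_likelihoods = [
        -0.5 * np.log(2.0 * np.pi * var) - (response - mu) ** 2 / (2.0 * var)
        for mu, var in zip(means, variances)
    ]
    return int(np.argmax(log_likelihoods))

# Hypothetical channel statistics for two symbols: mean detector responses
# and noise variances of the print-and-scan channel for each symbol.
means = [0.2, 0.8]
variances = [0.05, 0.05]
print(ml_decide(0.7, means, variances))  # → 1 (response is closer to symbol 1)
```

With equal variances this reduces to nearest-mean decoding; an asymmetric channel model would shift the decision boundary, which is the point of estimating the statistics from real printed-and-scanned data.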
dc.language.iso: eng
dc.publisher: NTNU
dc.relation.ispartofseries: Doctoral theses at NTNU;2019:8
dc.title: Halftone modulations for data hiding in color printed images
dc.type: Doctoral thesis
dc.subject.nsi: VDP::Technology: 500::Information and communication technology: 550::Computer technology: 551
dc.description.localcode: digital fulltext is not available


Associated file(s)


This item appears in the following collection(s)
