Show simple item record

dc.contributor.author: Barbosa, I.
dc.contributor.author: Cristani, Marco
dc.contributor.author: Caputo, Barbara
dc.contributor.author: Rognhaugen, Aleksander
dc.contributor.author: Theoharis, Theoharis
dc.date.accessioned: 2019-02-12T14:15:22Z
dc.date.available: 2019-02-12T14:15:22Z
dc.date.created: 2018-09-28T16:00:14Z
dc.date.issued: 2018
dc.identifier.citation: Computer Vision and Image Understanding. 2018, 167, 50-62.
dc.identifier.issn: 1077-3142
dc.identifier.uri: http://hdl.handle.net/11250/2585066
dc.description.abstract: Re-identification is generally carried out by encoding the appearance of a subject in terms of outfit, suggesting scenarios where people do not change their attire. In this paper we overcome this restriction by proposing a framework based on a deep convolutional neural network, SOMAnet, that additionally models other discriminative aspects, namely structural attributes of the human figure (e.g. height, obesity, gender). Our method is unique in many respects. First, SOMAnet is based on the Inception architecture, departing from the usual Siamese framework. This spares expensive data preparation (pairing images across cameras) and allows an understanding of what the network has learned. Second, and most notably, the training data consists of a synthetic 100K-instance dataset, SOMAset, created by photorealistic human body generation software. SOMAset will be released under an open source license to enable further developments in re-identification. Synthetic data represents a cost-effective way of acquiring semi-realistic imagery (full realism is usually not required in re-identification, since surveillance cameras capture low-resolution silhouettes), while at the same time providing complete control of the samples in terms of ground truth. It is thus relatively easy to customize the data with respect to the surveillance scenario at hand, e.g. ethnicity. SOMAnet, trained on SOMAset and fine-tuned on recent re-identification benchmarks, matches subjects even with different apparel.
dc.language.iso: eng
dc.publisher: Elsevier
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no
dc.title: Looking beyond appearances: Synthetic training data for deep CNNs in re-identification
dc.type: Journal article
dc.type: Peer reviewed
dc.description.version: acceptedVersion
dc.source.pagenumber: 50-62
dc.source.volume: 167
dc.source.journal: Computer Vision and Image Understanding
dc.identifier.doi: 10.1016/j.cviu.2017.12.002
dc.identifier.cristin: 1615910
dc.description.localcode: © 2017. This is the authors' accepted and refereed manuscript of the article. Locked until 12.12.2019 due to copyright restrictions. This manuscript version is made available under the CC-BY-NC-ND 4.0 license: http://creativecommons.org/licenses/by-nc-nd/4.0/
cristin.unitcode: 194,63,10,0
cristin.unitname: Institutt for datateknologi og informatikk
cristin.ispublished: true
cristin.fulltext: preprint
cristin.qualitycode: 2
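
The abstract describes training SOMAnet as a single-stream Inception classifier over synthetic identities, avoiding the cross-camera image pairing that Siamese re-identification networks require. The following is a minimal, hypothetical sketch of that style of training in PyTorch; the backbone choice (torchvision's Inception v3), the identity count, and all hyperparameters are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch of classification-style re-identification training.
    # NUM_IDENTITIES, the backbone, and hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_IDENTITIES = 100  # illustrative; set to the number of training identities

    # Inception v3 backbone with fresh identity-classification heads.
    net = models.inception_v3(weights=None, aux_logits=True)
    net.fc = nn.Linear(net.fc.in_features, NUM_IDENTITIES)
    net.AuxLogits.fc = nn.Linear(net.AuxLogits.fc.in_features, NUM_IDENTITIES)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)

    def train_step(images, identity_labels):
        """One cross-entropy step: each image is classified by identity,
        so no expensive cross-camera image pairing is needed."""
        net.train()
        optimizer.zero_grad()
        # In training mode, Inception v3 returns main and auxiliary logits.
        logits, aux_logits = net(images)
        loss = criterion(logits, identity_labels) \
             + 0.4 * criterion(aux_logits, identity_labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Usage with dummy data (Inception v3 expects 3x299x299 inputs):
    images = torch.randn(8, 3, 299, 299)
    labels = torch.randint(0, NUM_IDENTITIES, (8,))
    print(train_step(images, labels))

Fine-tuning on a real benchmark, as the abstract describes, would presumably amount to swapping the classification heads for the benchmark's identity count and continuing training from the synthetically pre-trained weights.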


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International.