Show simple item record

dc.contributor.author    Ma, Boyuan
dc.contributor.author    Yin, Xiang
dc.contributor.author    Wu, Di
dc.contributor.author    Shen, Haokai
dc.contributor.author    Ban, Xiaojuan
dc.contributor.author    Wang, Yu
dc.date.accessioned    2024-06-07T11:42:21Z
dc.date.available    2024-06-07T11:42:21Z
dc.date.created    2022-04-08T13:40:59Z
dc.date.issued    2022
dc.identifier.citation    Neurocomputing. 2022, 470, 204-216.    en_US
dc.identifier.issn    0925-2312
dc.identifier.uri    https://hdl.handle.net/11250/3133100
dc.description.abstract    The general aim of multi-focus image fusion is to gather the focused regions of different images into a single all-in-focus fused image. Deep learning based methods have become the mainstream of image fusion by virtue of their powerful feature representation ability. However, most existing deep learning structures fail to balance fusion quality with the convenience of end-to-end implementation. End-to-end decoder designs often lead to unrealistic results because of their non-linear mapping mechanism. On the other hand, generating an intermediate decision map yields better fused-image quality, but relies on rectification with empirically chosen post-processing parameters. In this work, to meet the requirements of both output image quality and simplicity of implementation, we propose a cascade network that simultaneously generates the decision map and the fused result with an end-to-end training procedure, avoiding any dependence on empirical post-processing methods at the inference stage. To improve fusion quality, we introduce a gradient aware loss function that preserves gradient information in the output fused image. In addition, we design a decision calibration strategy to reduce the time consumption when fusing multiple images. Extensive experiments compare our method with 19 state-of-the-art multi-focus image fusion structures using 6 assessment metrics. The results show that the proposed structure generally improves the quality of the output fused image, while implementation efficiency increases by over 30% for multiple-image fusion.    en_US
dc.language.iso    eng    en_US
dc.publisher    Elsevier    en_US
dc.title    End-to-end learning for simultaneously generating decision map and multi-focus image fusion result    en_US
dc.title.alternative    End-to-end learning for simultaneously generating decision map and multi-focus image fusion result    en_US
dc.type    Peer reviewed    en_US
dc.type    Journal article    en_US
dc.description.version    acceptedVersion    en_US
dc.rights.holder    © Copyright 2021 Elsevier    en_US
dc.source.pagenumber    204-216    en_US
dc.source.volume    470    en_US
dc.source.journal    Neurocomputing    en_US
dc.identifier.doi    10.1016/j.neucom.2021.10.115
dc.identifier.cristin    2016217
cristin.ispublished    true
cristin.fulltext    postprint
cristin.qualitycode    2
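
The gradient aware loss mentioned in the abstract is not detailed in this record. As a rough illustration only, the following PyTorch sketch shows one common way such a loss can be formed: a pixel-intensity term combined with a gradient-consistency term, where the reference would typically be the all-in-focus target used during training. The function names, the Sobel approximation of the gradients, and the weighting factor alpha are assumptions made for the sketch, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def image_gradients(x):
        # Approximate horizontal/vertical gradients with Sobel filters.
        # x: (N, 1, H, W) grayscale image batch.
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]], device=x.device).view(1, 1, 3, 3)
        sobel_y = sobel_x.transpose(2, 3)  # vertical Sobel kernel
        gx = F.conv2d(x, sobel_x, padding=1)
        gy = F.conv2d(x, sobel_y, padding=1)
        return gx, gy

    def gradient_aware_loss(fused, reference, alpha=0.5):
        # Pixel loss plus a gradient-consistency term weighted by alpha
        # (alpha is a hypothetical hyperparameter, not from the paper).
        pixel_term = F.l1_loss(fused, reference)
        fgx, fgy = image_gradients(fused)
        rgx, rgy = image_gradients(reference)
        grad_term = F.l1_loss(fgx, rgx) + F.l1_loss(fgy, rgy)
        return pixel_term + alpha * grad_term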


Associated file(s)


This item appears in the following Collection(s)
