dc.contributor.author | Karacan, Levent | |
dc.date.accessioned | 2024-01-19T10:56:13Z | |
dc.date.available | 2024-01-19T10:56:13Z | |
dc.date.issued | 2023 | en_US |
dc.identifier.citation | Karacan, L. (2023). Trainable Self-Guided Filter for Multi-Focus Image Fusion. IEEE Access, 11, pp. 139466-139477. | en_US |
dc.identifier.issn | 2169-3536 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12508/3047 | |
dc.description.abstract | Cameras are limited in their ability to capture all-in-focus images due to their limited depth of field. This results in blurriness for objects too far in front of or behind the focused point. To overcome this limitation, multi-focus image fusion (MFIF) approaches have been proposed. Although recent MFIF methods have shown promising results for this task, they still suffer from artifacts and color degradation. Motivated by these observations, in this paper, we propose a new Generative Adversarial Network (GAN)-based MFIF model that improves fusion quality by predicting more accurate focus maps thanks to an incorporated trainable guided filter. The proposed model comprises an encoder-decoder network and a trainable self-guided filtering (TSGF) module that is specifically designed to enhance spatial consistency in the predicted focus map and to eliminate the requirement for the post-processing used in existing GAN-based methods. The encoder-decoder network first predicts raw focus maps, which are then passed to the TSGF to produce the final focus maps. To train the proposed model effectively, we define three objectives: L1 loss, GAN loss, and Focal Frequency Loss (FFL) in the frequency domain. L1 loss is defined on ground-truth and predicted focus maps, whereas GAN loss and FFL are defined on ground-truth all-in-focus images and fused images. Experimental results show that the proposed approach outperforms existing GAN-based methods and achieves highly competitive performance with state-of-the-art methods in terms of standard quantitative image fusion metrics and visual quality on three MFIF benchmark datasets. | en_US |
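The abstract describes a pipeline (encoder-decoder producing a raw focus map, TSGF refinement, fusion, and a three-term objective). Below is a minimal, hypothetical PyTorch sketch of that training objective as described in the abstract; it is not the authors' code. The module names (generator, tsgf, discriminator), the simplified frequency loss standing in for FFL, and the loss weights are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def frequency_loss(pred, target):
    # Simplified frequency-domain distance between 2D FFTs of the images,
    # standing in for the Focal Frequency Loss (FFL) named in the abstract.
    pred_f = torch.fft.fft2(pred, norm="ortho")
    target_f = torch.fft.fft2(target, norm="ortho")
    return (pred_f - target_f).abs().pow(2).mean()

def training_objective(generator, tsgf, discriminator,
                       src_a, src_b, gt_focus_map, gt_all_in_focus,
                       w_l1=1.0, w_gan=0.1, w_ffl=1.0):
    # 1) Encoder-decoder predicts a raw focus map from the two source images.
    raw_map = generator(torch.cat([src_a, src_b], dim=1))
    # 2) Trainable self-guided filter refines it into the final focus map
    #    (interface assumed here for illustration).
    focus_map = tsgf(raw_map, src_a)
    # 3) Fuse the two sources with the focus map as per-pixel weights.
    fused = focus_map * src_a + (1.0 - focus_map) * src_b

    # L1 loss on ground-truth vs. predicted focus maps.
    loss_l1 = F.l1_loss(focus_map, gt_focus_map)
    # Non-saturating GAN loss on the fused image.
    logits = discriminator(fused)
    loss_gan = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # Frequency-domain loss on fused vs. ground-truth all-in-focus image.
    loss_ffl = frequency_loss(fused, gt_all_in_focus)

    return w_l1 * loss_l1 + w_gan * loss_gan + w_ffl * loss_ffl
```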
dc.language.iso | eng | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
dc.relation.isversionof | 10.1109/ACCESS.2023.3335307 | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Generative adversarial networks | en_US |
dc.subject | Guided filter | en_US |
dc.subject | Multi-focus image fusion | en_US |
dc.subject.classification | Computer Science | |
dc.subject.classification | Engineering | |
dc.subject.classification | Telecommunications | |
dc.subject.classification | Generative | |
dc.subject.classification | Computer Vision | |
dc.subject.classification | Source Domain | |
dc.subject.other | Bandpass filters | |
dc.subject.other | Benchmarking | |
dc.subject.other | Image analysis | |
dc.subject.other | Object recognition | |
dc.subject.other | Signal encoding | |
dc.subject.other | All-in-focus image | |
dc.subject.other | Encoder-decoder | |
dc.subject.other | Focus maps | |
dc.subject.other | Generator | |
dc.subject.other | Guided filters | |
dc.subject.other | Image color analysis | |
dc.subject.other | Information filter | |
dc.subject.other | Multifocus image fusion | |
dc.subject.other | Objects recognition | |
dc.subject.other | Task analysis | |
dc.title | Trainable Self-Guided Filter for Multi-Focus Image Fusion | en_US |
dc.type | article | en_US |
dc.relation.journal | IEEE Access | en_US |
dc.contributor.department | Faculty of Engineering and Natural Sciences -- Department of Computer Engineering | en_US |
dc.identifier.volume | 11 | en_US |
dc.identifier.startpage | 139466 | en_US |
dc.identifier.endpage | 139477 | en_US |
dc.relation.publicationcategory | Article - International Refereed Journal - Institutional Academic Staff Member | en_US |
dc.contributor.isteauthor | Karacan, Levent | |
dc.relation.index | Web of Science - Scopus | en_US |
dc.relation.index | Web of Science Core Collection - Science Citation Index Expanded | |