Show simple item record

dc.contributor.author: Karacan, Levent
dc.date.accessioned: 2024-01-19T10:56:13Z
dc.date.available: 2024-01-19T10:56:13Z
dc.date.issued: 2023 [en_US]
dc.identifier.citation: Karacan, L. (2023). Trainable Self-Guided Filter for Multi-Focus Image Fusion. IEEE Access, 11, pp. 139466-139477. [en_US]
dc.identifier.issn: 2169-3536
dc.identifier.uri: https://hdl.handle.net/20.500.12508/3047
dc.description.abstract: Cameras are limited in their ability to capture all-in-focus images due to their limited depth of field. This results in blurriness for objects too far in front of or behind the focused point. To overcome this limitation, multi-focus image fusion (MFIF) approaches have been proposed. Although recent MFIF methods have shown promising results for this task, they still need to be improved in terms of artifacts and color degradation. Motivated by these observations, in this paper, we propose a new Generative Adversarial Network (GAN)-based MFIF model that improves fusion quality by predicting more accurate focus maps, thanks to a trainable guided filter we incorporate. The proposed model comprises an encoder-decoder network and a trainable self-guided filtering (TSGF) module that is specifically designed to enhance spatial consistency in the predicted focus map and to eliminate the post-processing required by existing GAN-based methods. The encoder-decoder network first predicts raw focus maps, which are then passed to the TSGF to produce the final focus maps. To train the proposed model effectively, we define three objectives: L1 loss, GAN loss, and Focal Frequency Loss (FFL) in the frequency domain. The L1 loss is defined on ground-truth and predicted focus maps, whereas the GAN loss and FFL are defined on ground-truth all-in-focus images and fused images. Experimental results show that the proposed approach outperforms existing GAN-based methods and achieves highly competitive performance against state-of-the-art methods in terms of standard quantitative image fusion metrics and visual quality on three MFIF benchmark datasets. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Institute of Electrical and Electronics Engineers Inc. [en_US]
dc.relation.isversionof: 10.1109/ACCESS.2023.3335307 [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Generative adversarial networks [en_US]
dc.subject: Guided filter [en_US]
dc.subject: Multi-focus image fusion [en_US]
dc.subject.classification: Computer Science
dc.subject.classification: Engineering
dc.subject.classification: Telecommunications
dc.subject.classification: Generative
dc.subject.classification: Computer Vision
dc.subject.classification: Source Domain
dc.subject.other: Bandpass filters
dc.subject.other: Benchmarking
dc.subject.other: Image analysis
dc.subject.other: Object recognition
dc.subject.other: Signal encoding
dc.subject.other: All-in-focus image
dc.subject.other: Encoder-decoder
dc.subject.other: Focus maps
dc.subject.other: Generator
dc.subject.other: Guided filters
dc.subject.other: Image color analysis
dc.subject.other: Information filter
dc.subject.other: Multifocus image fusion
dc.subject.other: Objects recognition
dc.subject.other: Task analysis
dc.title: Trainable Self-Guided Filter for Multi-Focus Image Fusion [en_US]
dc.type: article [en_US]
dc.relation.journal: IEEE Access [en_US]
dc.contributor.department: Faculty of Engineering and Natural Sciences -- Department of Computer Engineering [en_US]
dc.identifier.volume: 11 [en_US]
dc.identifier.startpage: 139466 [en_US]
dc.identifier.endpage: 139477 [en_US]
dc.relation.publicationcategory: Article - International Refereed Journal - Institutional Faculty Member [en_US]
dc.contributor.isteauthor: Karacan, Levent
dc.relation.index: Web of Science - Scopus [en_US]
dc.relation.index: Web of Science Core Collection - Science Citation Index Expanded
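
The abstract above describes a composite training objective: an L1 loss on predicted versus ground-truth focus maps, plus a GAN loss and a Focal Frequency Loss (FFL) comparing fused images against ground-truth all-in-focus images. The sketch below illustrates how such an objective could be assembled. It is a minimal PyTorch illustration under stated assumptions (hypothetical loss weights and a generic re-implementation of the FFL idea), not the authors' released code.

```python
import torch
import torch.nn.functional as F

def focal_frequency_loss(pred, target, alpha=1.0):
    # Generic re-implementation of the Focal Frequency Loss idea:
    # compare images in the 2-D frequency domain and up-weight the
    # frequencies that are currently hardest to reconstruct.
    pf = torch.fft.fft2(pred, norm="ortho")    # (B, C, H, W) complex spectrum
    tf = torch.fft.fft2(target, norm="ortho")
    dist = (pf - tf).abs() ** 2                # squared spectral distance
    w = dist.sqrt().detach() ** alpha          # focal weight (no gradient)
    w = w / w.amax(dim=(-2, -1), keepdim=True).clamp(min=1e-8)
    return (w * dist).mean()

def generator_objective(raw_focus, gt_focus, fused, gt_aif, disc_logits,
                        w_l1=1.0, w_gan=0.1, w_ffl=1.0):
    # Composite objective as summarized in the abstract; the weights
    # w_* are hypothetical placeholders, not values from the paper.
    loss_l1 = F.l1_loss(raw_focus, gt_focus)          # focus-map supervision
    loss_gan = F.binary_cross_entropy_with_logits(    # non-saturating GAN term
        disc_logits, torch.ones_like(disc_logits))
    loss_ffl = focal_frequency_loss(fused, gt_aif)    # frequency-domain term
    return w_l1 * loss_l1 + w_gan * loss_gan + w_ffl * loss_ffl
```

Here `disc_logits` stands for the discriminator's output on the fused image and `gt_aif` for the ground-truth all-in-focus image; the grouping mirrors the abstract's split between focus-map supervision (L1) and fused-image supervision (GAN and FFL).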


Files in this item:


This item appears in the following collection(s).
