dc.contributor.author | Karacan, Levent | |
dc.contributor.author | Akata, Zeynep | |
dc.contributor.author | Erdem, Aykut | |
dc.contributor.author | Erdem, Erkut | |
dc.date.accessioned | 2020-05-24T14:24:16Z | |
dc.date.available | 2020-05-24T14:24:16Z | |
dc.date.issued | 2019 | |
dc.identifier.citation | Karacan, L., Akata, Z., Erdem, A., & Erdem, E. (2019). Manipulating attributes of natural scenes via hallucination. ACM Transactions on Graphics, 39(1), 7. https://doi.org/10.1145/3368312 | en_US |
dc.identifier.issn | 0730-0301 | |
dc.identifier.uri | https://doi.org/10.1145/3368312 | |
dc.identifier.uri | https://hdl.handle.net/20.500.12508/1052 | |
dc.description.abstract | In this study, we explore building a two-stage framework for enabling users to directly manipulate high-level attributes of a natural scene. The key to our approach is a deep generative network that can hallucinate images of a scene as if they were taken in a different season (e.g., during winter), weather condition (e.g., on a cloudy day), or at a different time of the day (e.g., at sunset). Once the scene is hallucinated with the given attributes, the corresponding look is then transferred to the input image while keeping the semantic details intact, yielding a photo-realistic manipulation result. As the proposed framework hallucinates what the scene will look like, it does not require any reference style image, as is commonly needed in most appearance or style transfer approaches. Moreover, it allows a given scene to be manipulated simultaneously according to a diverse set of transient attributes within a single model, eliminating the need to train a separate network for each translation task. Our comprehensive set of qualitative and quantitative results demonstrates the effectiveness of our approach against competing methods. © 2019 Association for Computing Machinery. | en_US |
dc.language.iso | eng | en_US |
dc.publisher | Association for Computing Machinery | en_US |
dc.relation.isversionof | 10.1145/3368312 | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Generative models | en_US |
dc.subject | Image generation | en_US |
dc.subject | Style transfer | en_US |
dc.subject | Visual attributes | en_US |
dc.subject.classification | Deep generative models | en_US |
dc.subject.classification | Computer vision | en_US |
dc.subject.classification | Computer Science | |
dc.subject.classification | Software Engineering | |
dc.subject.other | Computer graphics | en_US |
dc.subject.other | Generative model | en_US |
dc.subject.other | Image generations | en_US |
dc.subject.other | Multiple networks | en_US |
dc.subject.other | Natural scenes | en_US |
dc.subject.other | Photo-realistic | en_US |
dc.subject.other | Quantitative result | en_US |
dc.subject.other | Style transfer | en_US |
dc.subject.other | Visual attributes | en_US |
dc.subject.other | Semantics | en_US |
dc.title | Manipulating attributes of natural scenes via hallucination | en_US |
dc.type | article | en_US |
dc.relation.journal | ACM Transactions on Graphics | en_US |
dc.contributor.department | Mühendislik ve Doğa Bilimleri Fakültesi -- Bilgisayar Mühendisliği Bölümü | en_US |
dc.identifier.volume | 39 | en_US |
dc.identifier.issue | 1 | en_US |
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US |
dc.contributor.isteauthor | Karacan, Levent | en_US |
dc.relation.index | Web of Science - Scopus | en_US |
dc.relation.index | Web of Science Core Collection - Science Citation Index Expanded | |
dc.relation.index | Web of Science Core Collection - Social Sciences Citation Index | |