  Manipulating Attributes of Natural Scenes via Hallucination

Karacan, L., Akata, Z., Erdem, A., & Erdem, E. (2018). Manipulating Attributes of Natural Scenes via Hallucination. Retrieved from http://arxiv.org/abs/1808.07413.


Basic Data

Genre: Research Paper

Files

arXiv:1808.07413.pdf (Preprint), 9MB
Name: arXiv:1808.07413.pdf
Description: File downloaded from arXiv at 2018-09-17 09:59
OA Status: -
Visibility: Public
MIME Type / Checksum: application/pdf / [MD5]
Technical Metadata: -
Copyright Date: -
Copyright Info: -

Creators

Karacan, Levent (1), Author
Akata, Zeynep (2), Author
Erdem, Aykut (1), Author
Erdem, Erkut (1), Author
Affiliations:
(1) External Organizations, ou_persistent22
(2) Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_1116547

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: In this study, we explore building a two-stage framework that lets users directly manipulate high-level attributes of a natural scene. The key to our approach is a deep generative network that can hallucinate images of a scene as if they were taken in a different season (e.g. winter), under a different weather condition (e.g. on a cloudy day), or at a different time of day (e.g. at sunset). Once the scene is hallucinated with the given attributes, the corresponding look is transferred to the input image while keeping the semantic details intact, giving a photo-realistic manipulation result. Because the proposed framework hallucinates what the scene will look like, it does not require a reference style image, as is commonly needed in appearance or style transfer approaches. Moreover, it allows a given scene to be manipulated simultaneously according to a diverse set of transient attributes within a single model, eliminating the need to train multiple networks for each translation task. Our comprehensive set of qualitative and quantitative results demonstrates the effectiveness of our approach against competing methods.
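
The abstract describes a two-stage pipeline: a conditional generator first hallucinates the scene under the requested transient attributes, and the hallucinated look is then transferred onto the input image while its semantic content is preserved. The Python/PyTorch sketch below only illustrates that two-stage flow; the module names, shapes, and the AdaIN-style statistic matching used as a stand-in for stage 2 are assumptions for illustration, not the authors' implementation.

# Hypothetical sketch of a two-stage attribute-manipulation pipeline (PyTorch).
# All module names, layer sizes, and the stage-2 placeholder are illustrative assumptions.
import torch
import torch.nn as nn

class AttributeHallucinator(nn.Module):
    """Stage 1: conditional generator that hallucinates a coarse version of the scene
    matching the requested transient attributes (e.g. 'winter', 'sunset')."""
    def __init__(self, num_attributes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_attributes, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, attributes: torch.Tensor) -> torch.Tensor:
        # Broadcast the attribute vector over the spatial grid and concatenate with the image.
        b, _, h, w = image.shape
        attr_map = attributes.view(b, -1, 1, 1).expand(b, attributes.shape[1], h, w)
        return self.net(torch.cat([image, attr_map], dim=1))

def transfer_look(content: torch.Tensor, hallucinated: torch.Tensor) -> torch.Tensor:
    """Stage 2 (placeholder): apply the hallucinated look to the input image.
    A real system would use a semantics-preserving style-transfer method; here we simply
    match per-channel mean and standard deviation (AdaIN-style) as a stand-in."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True)
    s_mean = hallucinated.mean(dim=(2, 3), keepdim=True)
    s_std = hallucinated.std(dim=(2, 3), keepdim=True)
    return (content - c_mean) / (c_std + 1e-5) * s_std + s_mean

if __name__ == "__main__":
    image = torch.rand(1, 3, 128, 128)        # input scene
    target = torch.tensor([[1.0, 0.0, 0.0]])  # hypothetical attribute vector, e.g. "winter"
    hallucinator = AttributeHallucinator(num_attributes=3)
    hallucinated = hallucinator(image, target)  # stage 1: hallucinate the target look
    result = transfer_look(image, hallucinated) # stage 2: transfer the look to the input
    print(result.shape)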

Details

Language(s): eng - English
Date: 2018-08-22
Publication Status: Published online
Pages: 15 p.
Place, Publisher, Edition: -
Table of Contents: -
Review Type: -
Identifiers: arXiv: 1808.07413
URI: http://arxiv.org/abs/1808.07413
BibTeX Cite Key: Karacan_2018
Degree Type: -
