
Record

  A Novel BiLevel Paradigm for Image-to-Image Translation

Ma, L., Sun, Q., Schiele, B., & Van Gool, L. (2019). A Novel BiLevel Paradigm for Image-to-Image Translation. Retrieved from http://arxiv.org/abs/1904.09028.


Basic data

Genre: Research paper

Files

arXiv:1904.09028.pdf (Preprint), 3 MB
Name: arXiv:1904.09028.pdf
Description: File downloaded from arXiv at 2019-06-06 13:13
OA status:
Visibility: Public
MIME type / checksum: application/pdf / [MD5]
Technical metadata:
Copyright date: -
Copyright info: -

External references

Creators

Creators:
Ma, Liqian 1, Author
Sun, Qianru 2, Author
Schiele, Bernt 2, Author
Van Gool, Luc 1, Author
Affiliations:
1 External Organizations, ou_persistent22
2 Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: Image-to-image (I2I) translation is a pixel-level mapping that requires a large amount of paired training data and often suffers from high diversity and strong category bias in image scenes. To tackle these problems, we propose a novel BiLevel (BiL) learning paradigm that alternates the learning of two models, respectively at an instance-specific (IS) and a general-purpose (GP) level. In each scene, the IS model learns to maintain the specific scene attributes. It is initialized by the GP model, which learns from all the scenes to obtain generalizable translation knowledge. This GP initialization gives the IS model an efficient starting point, enabling fast adaptation to a new scene with scarce training data. We conduct extensive I2I translation experiments on human face and street view datasets. Quantitative results validate that our approach can significantly boost the performance of classical I2I translation models, such as PG2 and Pix2Pix. Our visualization results show both higher image quality and more appropriate instance-specific details, e.g., the translated image of a person looks more like that person in terms of identity.
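
The alternation described in the abstract can be summarized in a short sketch. The PyTorch-style Python below only illustrates the GP/IS scheme as stated (GP model trained across scenes, IS models initialized from it and adapted per scene); it is not the authors' code. The data interface (scenes, sample_pair) and the plain L1 reconstruction loss are assumed placeholders, and the actual models (PG2, Pix2Pix) use their own generator/discriminator objectives.

import copy
import torch
import torch.nn.functional as F

def bilevel_train(gp_model, scenes, rounds=10, gp_steps=100, is_steps=20, lr=1e-4):
    # General-purpose (GP) level: one model trained across all scenes to
    # accumulate generalizable translation knowledge.
    gp_opt = torch.optim.Adam(gp_model.parameters(), lr=lr)
    for _ in range(rounds):
        for _ in range(gp_steps):
            scene = scenes.sample()                   # hypothetical data interface
            src, tgt = scene.sample_pair()
            gp_opt.zero_grad()
            F.l1_loss(gp_model(src), tgt).backward()  # placeholder loss
            gp_opt.step()

        # Instance-specific (IS) level: each scene gets its own model,
        # initialized from the GP weights and quickly adapted on that
        # scene's scarce paired data.
        for scene in scenes:
            is_model = copy.deepcopy(gp_model)        # GP initialization
            is_opt = torch.optim.Adam(is_model.parameters(), lr=lr)
            for _ in range(is_steps):
                src, tgt = scene.sample_pair()
                is_opt.zero_grad()
                F.l1_loss(is_model(src), tgt).backward()
                is_opt.step()
            scene.is_model = is_model                 # keep the adapted model per scene
    return gp_model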

Details

Language(s): eng - English
Date: 2019-04-18, 2019
Publication status: Published online
Pages: 10 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 1904.09028
URI: http://arxiv.org/abs/1904.09028
BibTeX citekey: Ma_arXiv1904.09028
Degree type: -

Event

Decision

Project information

Source