Record
IsMo-GAN: Adversarial Learning for Monocular Non-Rigid 3D Reconstruction

Shimada, S., Golyanik, V., Theobalt, C., & Stricker, D. (2019). IsMo-GAN: Adversarial Learning for Monocular Non-Rigid 3D Reconstruction. Retrieved from http://arxiv.org/abs/1904.12144.

Basic data

Genre: Research paper

Files

arXiv:1904.12144.pdf (Preprint), 6 MB
Name:
arXiv:1904.12144.pdf
Description:
File downloaded from arXiv at 2019-07-09 10:16
OA status:
-
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
-
Copyright date:
-
Copyright info:
-

External references

-

Creators

Creators:
Shimada, Soshi1, Author
Golyanik, Vladislav2, Author
Theobalt, Christian2, Author
Stricker, Didier1, Author
Affiliations:
1 External Organizations, ou_persistent22
2 Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: The majority of existing methods for non-rigid 3D surface regression from monocular 2D images require an object template or point tracks over multiple frames as input, and are still far from real-time processing rates. In this work, we present the Isometry-Aware Monocular Generative Adversarial Network (IsMo-GAN) - an approach for direct 3D reconstruction from a single image, trained for the deformation model in an adversarial manner on a lightweight synthetic dataset. IsMo-GAN reconstructs surfaces from real images under varying illumination, camera poses, textures and shading at over 250 Hz. In multiple experiments, it consistently outperforms several approaches in reconstruction accuracy, runtime, generalisation to unknown surfaces and robustness to occlusions. In comparison to the state of the art, we reduce the reconstruction error by 10-30%, including the textureless case, and our surfaces exhibit fewer artefacts qualitatively.
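The adversarial training mentioned in the abstract pits a reconstruction generator against a discriminator that judges whether a surface looks real. A minimal sketch of the two competing objectives is shown below; the function names, the mean-squared 3D regression term, and the adversarial weighting are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy over discriminator scores in (0, 1).
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def discriminator_loss(d_real, d_fake):
    # The discriminator should score real surfaces as 1 and generated ones as 0.
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake, pred_surface, gt_surface, adv_weight=0.01):
    # A 3D regression term plus an adversarial term (the generator wants
    # the discriminator to score its output as real); adv_weight is a
    # hypothetical trade-off parameter, not the paper's value.
    l3d = np.mean((pred_surface - gt_surface) ** 2)
    adv = bce(d_fake, np.ones_like(d_fake))
    return l3d + adv_weight * adv
```

In a typical GAN-style setup, these two losses are minimised in alternation: one optimiser step on the discriminator, then one on the generator, so that the adversarial term pushes generated surfaces toward the statistics of real ones while the regression term keeps them geometrically accurate.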

Details

Language(s): eng - English
Date: 2019-04-27 (online), 2019
Publication status: Published online
Pages: 13 p.
Place, publisher, edition: -
Table of contents: -
Review method: -
Identifiers: arXiv: 1904.12144
URI: http://arxiv.org/abs/1904.12144
BibTex Citekey: Shimada_arXiv1904.12144
Degree: -

Event

-

Legal case

-

Project information

-

Source

-