Record

  Single-Shot Multi-Person 3D Body Pose Estimation From Monocular RGB Input

Mehta, D., Sotnychenko, O., Mueller, F., Xu, W., Sridhar, S., Pons-Moll, G., et al. (2017). Single-Shot Multi-Person 3D Body Pose Estimation From Monocular RGB Input. Retrieved from http://arxiv.org/abs/1712.03453.


Basic data

Genre: Research paper
LaTeX: Single-Shot Multi-Person {3D} Body Pose Estimation From Monocular {RGB} Input

Files

arXiv:1712.03453.pdf (Preprint), 8MB
Name:
arXiv:1712.03453.pdf
Description:
File downloaded from arXiv at 2018-02-01 10:36
OA status:
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-

External references


Creators

Creators:
Mehta, Dushyant1, Author
Sotnychenko, Oleksandr1, Author
Mueller, Franziska1, Author
Xu, Weipeng1, Author
Sridhar, Srinath2, Author
Pons-Moll, Gerard3, Author
Theobalt, Christian1, Author
Affiliations:
1Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047              
2External Organizations, ou_persistent22              
3Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society, ou_persistent22              

Content

Keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We propose a new efficient single-shot method for multi-person 3D pose estimation in general scenes from a monocular RGB camera. Our fully convolutional DNN-based approach jointly infers 2D and 3D joint locations on the basis of an extended 3D location map supported by body part associations. This new formulation enables the readout of full body poses at a subset of visible joints without the need for explicit bounding box tracking. It therefore succeeds even under strong partial body occlusions by other people and objects in the scene. We also contribute the first training data set showing real images of sophisticated multi-person interactions and occlusions. To this end, we leverage multi-view video-based performance capture of individual people for ground truth annotation and a new image compositing for user-controlled synthesis of large corpora of real multi-person images. We also propose a new video-recorded multi-person test set with ground truth 3D annotations. Our method achieves state-of-the-art performance on challenging multi-person scenes.
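
Readout sketch: the snippet below is a minimal illustration of the location-map readout idea described in the abstract, not the authors' implementation. The array shapes, variable names, and the read_out_pose helper are assumptions made for this example; it simply shows how root-relative 3D joint positions could be sampled from a per-joint location map at the 2D joints that the detection stage marks as visible, with no bounding box tracking involved. The paper's extended, occlusion-aware formulation is more involved than this.

import numpy as np

def read_out_pose(location_maps, joints_2d, visible):
    """Illustrative readout of one person's 3D pose from per-joint location maps.

    location_maps : float array of shape (H, W, J, 3); assumed to hold a
                    root-relative 3D position estimate for joint j at every pixel
                    (a simplification of the paper's extended location maps).
    joints_2d     : int array of shape (J, 2) with (x, y) pixel coordinates of
                    the person's detected 2D joints.
    visible       : bool array of shape (J,) marking joints the 2D detection /
                    part-association stage actually found.
    Returns an array of shape (J, 3); occluded joints are left as NaN.
    """
    num_joints = joints_2d.shape[0]
    pose_3d = np.full((num_joints, 3), np.nan)
    for j in range(num_joints):
        if not visible[j]:
            continue  # no bounding box or tracking: occluded joints are simply skipped
        x, y = joints_2d[j]
        pose_3d[j] = location_maps[y, x, j]  # sample the 3D estimate at the 2D detection
    return pose_3d

The same readout would be repeated per detected person in the image; details of the extended location-map formulation and the body part associations are omitted here.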

Details

Language(s): eng - English
Date: 2017-12-09, 2017
Publication status: Published online
Pages: 11 p.
Place, Publisher, Edition: -
Table of contents: -
Review method: -
Identifiers: arXiv: 1712.03453
URI: http://arxiv.org/abs/1712.03453
BibTeX citekey: Mehta1712.03453
Degree type: -

Event


Decision


Project information


Source
