  GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB

Mueller, F., Bernard, F., Sotnychenko, O., Mehta, D., Sridhar, S., Casas, D., et al. (2017). GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB. Retrieved from http://arxiv.org/abs/1712.01057.

Basic
Genre: Paper
Latex: {GANerated} Hands for Real-time {3D} Hand Tracking from Monocular {RGB}

Files:
arXiv:1712.01057.pdf (Preprint), 8MB
Name:
arXiv:1712.01057.pdf
Description:
File downloaded from arXiv at 2018-02-02 13:39
OA-Status:
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators:
Mueller, Franziska1, Author           
Bernard, Florian1, Author           
Sotnychenko, Oleksandr1, Author           
Mehta, Dushyant1, Author           
Sridhar, Srinath2, Author           
Casas, Dan2, Author           
Theobalt, Christian1, Author                 
Affiliations:
1Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047              
2External Organizations, ou_persistent22              

Content
Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. More specifically, we use a neural network that translates synthetic images to "real" images, such that the generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state of the art on challenging RGB-only footage.
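The abstract describes training the translation network with three combined terms: an adversarial loss, a cycle-consistency loss, and a geometric consistency loss. The following is a minimal Python sketch of how such a combined objective could be assembled; the function bodies (least-squares adversarial term, L1 cycle term, L2 geometry term) and the weighting factors are illustrative assumptions, not the paper's actual implementation.

```python
def _mean(xs):
    return sum(xs) / len(xs)

def adversarial_loss(d_fake):
    # Least-squares GAN generator term (an assumed choice): push the
    # discriminator's scores on translated images toward 1 ("real").
    return _mean([(d - 1.0) ** 2 for d in d_fake])

def cycle_consistency_loss(x, x_rec):
    # L1 penalty between an image and its round-trip translation
    # (synthetic -> "real" -> synthetic), as in cycle-consistent GANs.
    return _mean([abs(a - b) for a, b in zip(x, x_rec)])

def geometric_consistency_loss(geom_src, geom_trans):
    # Penalize changes in geometric properties (e.g. hand pose cues)
    # between a synthetic image and its translation, so the synthetic
    # pose annotations remain valid for the translated image.
    return _mean([(a - b) ** 2 for a, b in zip(geom_src, geom_trans)])

def translation_loss(d_fake, x, x_rec, geom_src, geom_trans,
                     lam_cyc=10.0, lam_geo=1.0):
    # Weighted sum of the three terms; the lambda weights here are
    # assumed values, not taken from the paper.
    return (adversarial_loss(d_fake)
            + lam_cyc * cycle_consistency_loss(x, x_rec)
            + lam_geo * geometric_consistency_loss(geom_src, geom_trans))
```

With perfect translations (discriminator fooled, exact cycle reconstruction, unchanged geometry) every term vanishes, so the total loss is zero; any deviation in one component raises the objective independently of the others.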

Details
Language(s): eng - English
Dates: 2017-12-04, 2017
Publication Status: Published online
Pages: 13 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 1712.01057
URI: http://arxiv.org/abs/1712.01057
BibTex Citekey: Mueller_arXiv1712.01057
Degree: -
