
Released

Paper

HandSeg: An Automatically Labeled Dataset for Hand Segmentation from Depth Images

MPS-Authors
/persons/resource/persons134216

Mueller, Franziska
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons45610

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

Citation

Bojja, A. K., Mueller, F., Malireddi, S. R., Oberweger, M., Lepetit, V., Theobalt, C., et al. (2017). HandSeg: An Automatically Labeled Dataset for Hand Segmentation from Depth Images. Retrieved from http://arxiv.org/abs/1711.05944.


Cite as: https://hdl.handle.net/21.11116/0000-0000-6132-A
Abstract
We introduce a large-scale RGBD hand segmentation dataset with detailed, automatically generated, high-quality ground-truth annotations. Existing real-world datasets are limited in size because manually annotating ground-truth labels is difficult. By leveraging a pair of brightly colored gloves and an RGBD camera, we propose an acquisition pipeline that eases the task of annotating very large datasets with minimal human intervention. We then quantify the importance of a large annotated dataset in this domain, and compare the performance of existing datasets in the training of deep-learning architectures. Finally, we propose a novel architecture employing strided convolutions/deconvolutions in place of max-pooling and unpooling layers. Our variant outperforms baseline architectures while remaining computationally efficient at inference time. Source code and datasets will be made publicly available.
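The abstract's glove-based acquisition idea can be sketched as simple color thresholding: pixels close to the known glove color are labeled as hand. This is only an illustrative approximation of such a pipeline, not the authors' actual method; the glove color, tolerance, and RGB distance metric below are all assumptions.

```python
import numpy as np

def glove_mask(rgb, glove_rgb=(255, 0, 0), tol=60.0):
    """Illustrative automatic labeling: mark pixels whose color lies
    within `tol` (Euclidean distance in RGB space) of the glove color.

    rgb: H x W x 3 uint8 image. Returns an H x W boolean hand mask.
    The red glove color and tolerance are hypothetical placeholders.
    """
    diff = rgb.astype(np.float32) - np.asarray(glove_rgb, np.float32)
    return np.linalg.norm(diff, axis=-1) < tol
```

A real pipeline would likely threshold in a more illumination-robust color space (e.g. HSV) and clean the mask morphologically, but the principle of replacing manual annotation with color cues is the same.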
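The architectural change the abstract describes, replacing max-pooling with strided convolution, can be illustrated with a minimal single-channel strided convolution: downsampling is folded into a learnable filtering step instead of a fixed pooling layer. This is a didactic sketch, not the paper's network; filter size and stride are assumptions.

```python
import numpy as np

def strided_conv2d(x, w, stride=2):
    """Valid 2-D cross-correlation with a stride, on a single channel.

    x: H x W input, w: k x k filter. With stride > 1 this both filters
    and downsamples, standing in for a conv + max-pool pair.
    """
    k = w.shape[0]
    H = (x.shape[0] - k) // stride + 1
    W = (x.shape[1] - k) // stride + 1
    out = np.empty((H, W), dtype=np.float64)
    for i in range(H):
        for j in range(W):
            patch = x[i * stride:i * stride + k, j * stride:j * stride + k]
            out[i, j] = np.sum(patch * w)
    return out
```

In a segmentation decoder, the mirror operation would be a strided transposed convolution (deconvolution) that upsamples back to input resolution, again with learnable weights rather than a fixed unpooling rule.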