
Released

Research Paper

HandSeg: An Automatically Labeled Dataset for Hand Segmentation from Depth Images

MPG Authors

Mueller, Franziska
Computer Graphics, MPI for Informatics, Max Planck Society;

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Bojja, A. K., Mueller, F., Malireddi, S. R., Oberweger, M., Lepetit, V., Theobalt, C., et al. (2017). HandSeg: An Automatically Labeled Dataset for Hand Segmentation from Depth Images. Retrieved from http://arxiv.org/abs/1711.05944.


Citation link: https://hdl.handle.net/21.11116/0000-0000-6132-A
Abstract
We introduce a large-scale RGBD hand segmentation dataset, with detailed and
automatically generated high-quality ground-truth annotations. Existing
real-world datasets are limited in quantity due to the difficulty in manually
annotating ground-truth labels. By leveraging a pair of brightly colored gloves
and an RGBD camera, we propose an acquisition pipeline that eases the task of
annotating very large datasets with minimal human intervention. We then
quantify the importance of a large annotated dataset in this domain, and
compare the performance of existing datasets in the training of deep-learning
architectures. Finally, we propose a novel architecture employing strided
convolution/deconvolutions in place of max-pooling and unpooling layers. Our
variant outperforms baseline architectures while remaining computationally
efficient at inference time. Source code and datasets will be made publicly
available.
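
To illustrate the acquisition idea described in the abstract, the following is a minimal sketch of how a brightly colored glove could be turned into a per-pixel hand mask for a registered RGB-D frame. It assumes OpenCV and NumPy; the HSV thresholds and the function name hand_mask_from_glove are illustrative placeholders, not the calibration or code used by the authors.

# Hypothetical sketch of glove-based automatic annotation, assuming the RGB and
# depth frames are already registered. The paper's actual pipeline and color
# calibration are not reproduced here.
import cv2
import numpy as np

# Placeholder HSV range for a brightly colored (e.g. green) glove; real
# thresholds would be calibrated per glove and lighting setup.
GLOVE_HSV_LOW = np.array([45, 80, 80], dtype=np.uint8)
GLOVE_HSV_HIGH = np.array([75, 255, 255], dtype=np.uint8)

def hand_mask_from_glove(rgb_bgr, depth_mm):
    """Return a binary hand mask aligned with the depth image."""
    hsv = cv2.cvtColor(rgb_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GLOVE_HSV_LOW, GLOVE_HSV_HIGH)
    # Clean up small speckles with morphological opening and closing.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Discard pixels with invalid depth so the labels stay consistent with the
    # depth image used for training.
    mask[depth_mm == 0] = 0
    return (mask > 0).astype(np.uint8)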
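
The architectural variant is likewise only sketched below: a minimal PyTorch encoder-decoder in which strided convolutions replace max-pooling and strided transposed convolutions replace unpooling, as the abstract describes. Layer widths, depths, and the class count are illustrative assumptions, not the paper's configuration.

# Minimal PyTorch sketch of the down/upsampling idea from the abstract:
# strided convolutions instead of max-pooling, strided transposed
# convolutions instead of unpooling.
import torch
import torch.nn as nn

class StridedSegNet(nn.Module):
    def __init__(self, num_classes=2):  # e.g. background vs. hand; the real label set is defined by the dataset
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),   # learned downsampling
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # learned upsampling
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, depth):
        return self.decoder(self.encoder(depth))

# Usage: a batch of single-channel depth maps -> per-pixel class scores.
logits = StridedSegNet()(torch.randn(2, 1, 128, 128))  # shape (2, num_classes, 128, 128)

Because downsampling and upsampling are learned convolutions, such a decoder needs no pooling-index bookkeeping, which is one reason this kind of variant can stay cheap at inference time.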