
Released

Conference Paper

Detecting Human-Object Contact in Images

MPS-Authors

/persons/resource/persons283577
Dwivedi, Sai Kumar
Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society

/persons/resource/persons75293
Black, Michael J.
Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society
Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Chen, Y., Dwivedi, S. K., Black, M. J., & Tzionas, D. (2023). Detecting Human-Object Contact in Images. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 17100-17110). New York, NY: IEEE. doi:10.1109/CVPR52729.2023.01640.


Cite as: https://hdl.handle.net/21.11116/0000-0010-8A64-2
Abstract
Humans constantly contact objects to move and perform tasks. Thus, detecting human-object contact is important for building human-centered artificial intelligence. However, there exists no robust method to detect contact between the body and the scene from an image, and there exists no dataset to learn such a detector. We fill this gap with HOT ("Human-Object conTact"), a new dataset of human-object contacts for images. To build HOT, we use two data sources: (1) We use the PROX dataset of 3D human meshes moving in 3D scenes, and automatically annotate 2D image areas for contact via 3D mesh proximity and projection. (2) We use the V-COCO, HAKE and Watch-n-Patch datasets, and ask trained annotators to draw polygons for the 2D image areas where contact takes place. We also annotate the involved body part of the human body. We use our HOT dataset to train a new contact detector, which takes a single color image as input, and outputs 2D contact heatmaps as well as the body-part labels that are in contact. This is a new and challenging task that extends current foot-ground or hand-object contact detectors to the full generality of the whole body. The detector uses a part-attention branch to guide contact estimation through the context of the surrounding body parts and scene. We evaluate our detector extensively, and quantitative results show that our model outperforms baselines, and that all components contribute to better performance. Results on images from an online repository show reasonable detections and generalizability.
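
To make the automatic annotation idea in the abstract concrete, below is a minimal sketch of labeling contact by 3D mesh proximity and projecting it into the image. This is not the authors' released code or the exact PROX-based pipeline; the 2 cm threshold, the use of NumPy/SciPy, and all function names are illustrative assumptions.

```python
# Sketch (illustrative only): mark body-mesh vertices lying close to the scene
# mesh as "in contact", then project them into the image to get a 2D contact mask.
import numpy as np
from scipy.spatial import cKDTree

def contact_vertices(body_verts, scene_verts, thresh=0.02):
    """Boolean mask of body vertices within `thresh` meters of the scene vertices."""
    tree = cKDTree(scene_verts)           # nearest-neighbor structure over scene vertices
    dists, _ = tree.query(body_verts)     # distance from each body vertex to the closest scene vertex
    return dists < thresh

def project_to_image(verts, K, img_hw):
    """Project 3D points (camera coordinates) through pinhole intrinsics K; keep in-image points."""
    z = verts[:, 2]
    uv = (K @ verts.T).T                  # homogeneous pixel coordinates
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
    h, w = img_hw
    ok = (z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[ok].astype(int)

def contact_mask(body_verts, scene_verts, K, img_hw, thresh=0.02):
    """Rasterize projected contact vertices into a binary 2D contact mask."""
    mask = np.zeros(img_hw, dtype=np.uint8)
    in_contact = contact_vertices(body_verts, scene_verts, thresh)
    uv = project_to_image(body_verts[in_contact], K, img_hw)
    mask[uv[:, 1], uv[:, 0]] = 1          # rows index v (y), columns index u (x)
    return mask
```

A per-vertex distance threshold is a common proxy for physical contact; the projected mask approximates the 2D image area where contact takes place, which in the paper is complemented by manually drawn polygons on the V-COCO, HAKE, and Watch-n-Patch images.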