
Released

Paper

TOCH: Spatio-Temporal Object Correspondence to Hand for Motion Refinement

MPS-Authors
/persons/resource/persons251918

Zhou, Keyang
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

/persons/resource/persons221909

Lal Bhatnagar, Bharat
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

/persons/resource/persons265839

Lenssen, Jan Eric
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

/persons/resource/persons118756

Pons-Moll, Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:2205.07982.pdf
(Preprint), 10MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Zhou, K., Lal Bhatnagar, B., Lenssen, J. E., & Pons-Moll, G. (2022). TOCH: Spatio-Temporal Object Correspondence to Hand for Motion Refinement. Retrieved from https://arxiv.org/abs/2205.07982.


Cite as: http://hdl.handle.net/21.11116/0000-000A-ACF3-2
Abstract
We present TOCH, a method for refining incorrect 3D hand-object interaction sequences using a data prior. Existing hand trackers, especially those that rely on very few cameras, often produce visually unrealistic results with hand-object intersection or missing contacts. Although correcting such errors requires reasoning about temporal aspects of interaction, most previous works focus on static grasps and contacts. The core of our method is TOCH fields, a novel spatio-temporal representation for modeling correspondences between hands and objects during interaction. The key component is a point-wise, object-centric representation which encodes the hand position relative to the object. Leveraging this novel representation, we learn a latent manifold of plausible TOCH fields with a temporal denoising auto-encoder. Experiments demonstrate that TOCH outperforms state-of-the-art (SOTA) 3D hand-object interaction models, which are limited to static grasps and contacts. More importantly, our method produces smooth interactions even before and after contact. Using a single trained TOCH model, we quantitatively and qualitatively demonstrate its usefulness for 1) correcting erroneous reconstruction results from off-the-shelf RGB/RGB-D hand-object reconstruction methods, 2) denoising, and 3) grasp transfer across objects. We will release our code and trained model on our project page at http://virtualhumans.mpi-inf.mpg.de/toch/
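The abstract's key idea is a point-wise, object-centric encoding: for each object surface point at each frame, record where the hand is relative to that point. As a rough illustration only (not the paper's actual TOCH-field definition, which also includes correspondence and occupancy terms), the following hedged sketch computes, per frame and per object point, the nearest hand point and the distance to it; the function name and shapes are hypothetical.

```python
import numpy as np

def toch_field_sketch(obj_points, hand_points):
    """Simplified per-frame, per-object-point hand correspondence.

    obj_points:  (T, N, 3) object surface points over T frames
    hand_points: (T, M, 3) hand surface points over T frames
    Returns, for each frame t and object point n, the index of the
    nearest hand point and the Euclidean distance to it -- a crude
    stand-in for the object-centric encoding described in the abstract.
    """
    T, N, _ = obj_points.shape
    corr_idx = np.empty((T, N), dtype=np.int64)
    corr_dist = np.empty((T, N))
    for t in range(T):
        # (N, M) pairwise distances between object and hand points
        d = np.linalg.norm(
            obj_points[t][:, None, :] - hand_points[t][None, :, :], axis=-1
        )
        corr_idx[t] = d.argmin(axis=1)   # nearest hand point per object point
        corr_dist[t] = d.min(axis=1)     # distance to that hand point
    return corr_idx, corr_dist
```

Because the encoding is anchored on the object rather than the hand, a sequence of such per-point features over T frames forms a spatio-temporal signal that a temporal denoising auto-encoder could be trained on, which is the modeling step the abstract outlines.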