
Released

Paper

PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations

MPS-Authors
/persons/resource/persons226650

Tretschk, Edgar
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons206546

Tewari, Ayush
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons239654

Golyanik, Vladislav
Computer Graphics, MPI for Informatics, Max Planck Society;

/persons/resource/persons45610

Theobalt, Christian
Computer Graphics, MPI for Informatics, Max Planck Society;

Fulltext (public)

arXiv:2008.01639.pdf
(Preprint), 9MB

Citation

Tretschk, E., Tewari, A., Golyanik, V., Zollhöfer, M., Stoll, C., & Theobalt, C. (2020). PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations. Retrieved from https://arxiv.org/abs/2008.01639.


Cite as: https://hdl.handle.net/21.11116/0000-0007-E8ED-9
Abstract
Implicit surface representations, such as signed-distance functions, combined with deep learning have led to impressive models that can represent detailed shapes of objects with arbitrary topology. Since a continuous function is learned, reconstructions can be extracted at any resolution. However, large datasets such as ShapeNet are required to train such models. In this paper, we present a new mid-level patch-based surface representation. At the level of patches, objects across different categories share similarities, which leads to more generalizable models. We then introduce a novel method to learn this patch-based representation in a canonical space, such that it is as object-agnostic as possible. We show that our representation, trained on a single category of objects from ShapeNet, can also represent detailed shapes from any other category well. In addition, it can be trained with far fewer shapes than existing approaches require. We show several applications of our new representation, including shape interpolation and partial point cloud completion. Due to explicit control over the positions, orientations, and scales of patches, our representation is also more controllable than object-level representations, which enables us to deform encoded shapes non-rigidly.
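
To make the idea above concrete, below is a minimal, hypothetical PyTorch sketch of a patch-based implicit SDF: a single shared MLP decoder, conditioned on a per-patch latent code, evaluates query points in each patch's canonical frame (given by an assumed position, rotation, and scale), and the per-patch predictions are blended with Gaussian weights into a global signed distance. All names, the network size, and the blending scheme are illustrative assumptions drawn only from the abstract, not the authors' implementation.

# Minimal, illustrative sketch of a patch-based implicit SDF (not the paper's code).
# Assumptions: one shared decoder conditioned on per-patch latent codes, patch
# extrinsics given as position/rotation/scale, Gaussian falloff for blending.
import torch
import torch.nn as nn

class PatchSDF(nn.Module):
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        # Shared decoder: (per-patch latent code, local 3D point) -> signed distance.
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, codes, centers, rotations, scales):
        # points:    (P, 3)     query points in object space
        # codes:     (N, D)     per-patch latent codes
        # centers:   (N, 3)     patch positions
        # rotations: (N, 3, 3)  patch orientations (object frame -> patch frame)
        # scales:    (N,)       patch radii
        # Express every query point in every patch's canonical (local) frame.
        diff = points[:, None, :] - centers[None, :, :]            # (P, N, 3)
        local = torch.einsum('nij,pnj->pni', rotations, diff)      # (P, N, 3)
        local = local / scales[None, :, None]

        P, N, _ = local.shape
        codes_exp = codes[None, :, :].expand(P, N, -1)             # (P, N, D)
        sdf = self.mlp(torch.cat([codes_exp, local], dim=-1))      # (P, N, 1)
        sdf = sdf.squeeze(-1) * scales[None, :]                    # back to object units

        # Gaussian blending weights, falling off with distance from each patch center.
        weights = torch.exp(-0.5 * (local ** 2).sum(dim=-1))       # (P, N)
        weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-8)
        return (weights * sdf).sum(dim=-1)                         # (P,) blended global SDF

In this sketch, the latent codes and patch extrinsics would be fit per shape against sampled signed-distance values (e.g. auto-decoder style); that optimization loop is omitted here. Because the patch positions, orientations, and scales enter the blend explicitly, moving or rescaling them deforms the encoded shape, which is the kind of controllability the abstract describes.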