  PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations

Tretschk, E., Tewari, A., Golyanik, V., Zollhöfer, M., Stoll, C., & Theobalt, C. (2020). PatchNets: Patch-Based Generalizable Deep Implicit 3D Shape Representations. Retrieved from https://arxiv.org/abs/2008.01639.

Basic
Genre: Paper
LaTeX: {PatchNets}: {P}atch-Based Generalizable Deep Implicit {3D} Shape Representations

Files

arXiv:2008.01639.pdf (Preprint), 9MB
Name: arXiv:2008.01639.pdf
Description: File downloaded from arXiv at 2021-02-08 11:52
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -

Creators

Tretschk, Edgar (1), Author
Tewari, Ayush (1), Author
Golyanik, Vladislav (1), Author
Zollhöfer, Michael (2), Author
Stoll, Carsten (2), Author
Theobalt, Christian (1), Author
Affiliations:
(1) Computer Graphics, MPI for Informatics, Max Planck Society, ou_40047
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Computer Vision and Pattern Recognition, cs.CV, Computer Science, Graphics, cs.GR
Abstract: Implicit surface representations, such as signed-distance functions, combined with deep learning have led to impressive models that can represent detailed shapes of objects with arbitrary topology. Since a continuous function is learned, reconstructions can be extracted at any resolution. However, large datasets such as ShapeNet are required to train such models. In this paper, we present a new mid-level patch-based surface representation. At the level of patches, objects across different categories share similarities, which leads to more generalizable models. We then introduce a novel method to learn this patch-based representation in a canonical space, such that it is as object-agnostic as possible. We show that our representation, trained on a single category of objects from ShapeNet, can also represent detailed shapes from any other category well. In addition, it can be trained with far fewer shapes than existing approaches require. We show several applications of our new representation, including shape interpolation and partial point cloud completion. Due to explicit control over the positions, orientations, and scales of patches, our representation is also more controllable than object-level representations, which enables us to deform encoded shapes non-rigidly.
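The abstract describes a surface assembled from patches, each with an explicit position, orientation, and scale, whose local signed-distance values are combined into one implicit function. The sketch below illustrates that general idea only; it is not the paper's method. The hand-coded unit-sphere SDF stands in for the learned patch networks, and the Gaussian blending weights and all parameter values are assumptions for illustration.

```python
import numpy as np

def local_sdf(p):
    """Stand-in for a learned patch network: SDF of a unit sphere (assumption)."""
    return np.linalg.norm(p, axis=-1) - 1.0

def patch_sdf(x, center, rotation, scale):
    """Evaluate one patch: map world points into the patch's canonical frame,
    query the local SDF, and rescale distances back to world units."""
    local = (rotation.T @ (x - center).T).T / scale
    return scale * local_sdf(local)

def blended_sdf(x, patches, sigma=0.5):
    """Blend patch SDFs with Gaussian weights on distance to each patch center
    (an illustrative blending choice, not the paper's formulation)."""
    values, weights = [], []
    for center, rotation, scale in patches:
        d = np.linalg.norm(x - center, axis=-1)
        values.append(patch_sdf(x, center, rotation, scale))
        weights.append(np.exp(-d**2 / (2.0 * sigma**2)))
    values = np.stack(values)               # (num_patches, num_points)
    weights = np.stack(weights) + 1e-9      # avoid division by zero
    return (values * weights).sum(axis=0) / weights.sum(axis=0)

# Two overlapping unit-sphere patches along the x-axis
patches = [
    (np.array([0.0, 0.0, 0.0]), np.eye(3), 1.0),
    (np.array([1.0, 0.0, 0.0]), np.eye(3), 1.0),
]
query = np.array([[0.0, 0.0, 0.0],   # inside the surface
                  [3.0, 0.0, 0.0]])  # well outside it
print(blended_sdf(query, patches))   # negative inside, positive outside
```

The explicit `(center, rotation, scale)` tuple per patch is what gives this representation its controllability: moving or rescaling a tuple deforms the encoded shape locally, which object-level latent codes do not allow.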

Details

Language(s): eng - English
Dates: 2020-08-04, 2021-02-05, 2020
Publication Status: Published online
Pages: 25 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 2008.01639
BibTeX Citekey: Tretschk_2008.01639
URI: https://arxiv.org/abs/2008.01639
Degree: -
