
Item Details


Released

Report

Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes

MPS-Authors
/persons/resource/persons251918

Zhou, Keyang
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

/persons/resource/persons221909

Bhatnagar, Bharat Lal
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

/persons/resource/persons45383

Schiele, Bernt
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

/persons/resource/persons118756

Pons-Moll, Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

External Resource
There are no locators available
Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)

arXiv:2102.01161.pdf
(Preprint), 9KB

Supplementary Material (public)
There is no public supplementary material available
Citation

Zhou, K., Bhatnagar, B. L., Schiele, B., & Pons-Moll, G. (2021). Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes. Retrieved from https://arxiv.org/abs/2102.01161.


Cite as: https://hdl.handle.net/21.11116/0000-0009-80FA-C
Abstract
Most learning methods for 3D data (point clouds, meshes) suffer significant performance drops when the data is not carefully aligned to a canonical orientation. Aligning real-world 3D data collected from different sources is non-trivial and requires manual intervention. In this paper, we propose the Adjoint Rigid Transform (ART) Network, a neural module which can be integrated with a variety of 3D networks to significantly boost their performance. ART learns to rotate input shapes to a learned canonical orientation, which is crucial for many tasks such as shape reconstruction, interpolation, non-rigid registration, and latent disentanglement. ART achieves this with self-supervision and a rotation equivariance constraint on the predicted rotations. The remarkable result is that with only self-supervision, ART facilitates learning a unique canonical orientation for both rigid and non-rigid shapes, which leads to a notable performance boost on the aforementioned tasks. We will release our code and pre-trained models for further research.
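
To make the mechanism in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a canonicalizing module in the spirit of ART, not the authors' released code. The class RotationCanonicalizer, its PointNet-style encoder, and the helper random_rotations are illustrative assumptions, and the concrete constraint A(x gᵀ) ≈ A(x) gᵀ (for points stored as rows) is just one natural form of the rotation equivariance the abstract describes.

import torch
import torch.nn as nn

class RotationCanonicalizer(nn.Module):
    """Hypothetical sketch: predict one rotation per input point cloud
    and use it to map the cloud to a learned canonical orientation."""

    def __init__(self, hidden=128):
        super().__init__()
        # Per-point MLP followed by max-pooling (PointNet-style encoder).
        self.encoder = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 9)  # regress a raw 3x3 matrix

    def forward(self, points):                       # points: (B, N, 3)
        feat = self.encoder(points).max(dim=1).values
        m = self.head(feat).view(-1, 3, 3)
        # Project the raw matrix onto SO(3) via SVD (closest rotation
        # in Frobenius norm, with the determinant forced to +1).
        u, _, vt = torch.linalg.svd(m)
        det = torch.linalg.det(u @ vt)
        d = torch.diag_embed(torch.stack(
            [torch.ones_like(det), torch.ones_like(det), det], dim=-1))
        return u @ d @ vt                             # (B, 3, 3) rotations

def random_rotations(b):
    """Random rotations via the matrix exponential of skew-symmetric
    matrices built from random axis-angle vectors."""
    w = torch.randn(b, 3)
    zero = torch.zeros(b)
    skew = torch.stack([
        torch.stack([zero,    -w[:, 2],  w[:, 1]], dim=-1),
        torch.stack([w[:, 2],  zero,    -w[:, 0]], dim=-1),
        torch.stack([-w[:, 1], w[:, 0],  zero],    dim=-1)], dim=1)
    return torch.matrix_exp(skew)

# Self-supervised equivariance term: canonicalizing a rotated copy of a
# shape should undo the extra rotation, i.e. A(x @ gᵀ) ≈ A(x) @ gᵀ for
# points stored as rows. Minimizing this drives the network toward a
# consistent canonical orientation.
net = RotationCanonicalizer()
x = torch.randn(4, 1024, 3)                   # batch of random point clouds
g = random_rotations(4)
x_rot = x @ g.transpose(1, 2)                 # rotate every cloud by g
equiv_loss = ((net(x_rot) - net(x) @ g.transpose(1, 2)) ** 2).mean()
equiv_loss.backward()

In a full pipeline this equivariance term would be combined with the downstream task's own loss (reconstruction, registration, etc.); the sketch above deliberately omits the task network.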