Fast Gravitational Approach for Rigid Point Set Registration with Ordinary Differential Equations


Theobalt,  Christian       
Computer Graphics, MPI for Informatics, Max Planck Society;


Golyanik,  Vladislav
Computer Graphics, MPI for Informatics, Max Planck Society;

Fulltext (public)
(Preprint), 10MB

Ali, S. A., Kahraman, K., Theobalt, C., Stricker, D., & Golyanik, V. (2020). Fast Gravitational Approach for Rigid Point Set Registration with Ordinary Differential Equations. Retrieved from https://arxiv.org/abs/2009.14005.

Cite as: https://hdl.handle.net/21.11116/0000-0007-E8FA-A
This article introduces a new physics-based method for rigid point set
alignment called Fast Gravitational Approach (FGA). In FGA, the source and
target point sets are interpreted as rigid particle swarms with masses
interacting in a globally multiply-linked manner while moving in a simulated
gravitational force field. The optimal alignment is obtained by explicit
modeling of forces acting on the particles as well as their velocities and
displacements with second-order ordinary differential equations of motion.
Additional alignment cues (point-based or geometric features, and other
boundary conditions) can be integrated into FGA through particle masses. We
propose a smooth-particle mass function for point mass initialization, which
improves robustness to noise and structural discontinuities. To avoid the
prohibitive quadratic complexity of all-to-all point interactions, we adapt a
Barnes-Hut tree for accelerated force computation and achieve quasilinear
computational complexity. We show that the new method class has characteristics
not found in previous alignment methods, such as efficient handling of partial
overlaps and inhomogeneous point sampling densities, and the ability to process
large point clouds with reduced runtime compared to the state of the art.
Experiments show that our method performs on par with or outperforms all
compared non-deep-learning-based, general-purpose techniques (which do not
assume the availability of training data or a scene prior) in resolving
transformations for LiDAR data, and achieves state-of-the-art accuracy and
speed under different types of data disturbances.
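To illustrate the force-driven dynamics the abstract describes, the sketch below integrates second-order equations of motion for a rigid source swarm in a softened gravitational field induced by the target points. This is a simplified reconstruction, not the authors' FGA implementation: it assumes unit swarm mass, an identity inertia tensor, semi-implicit Euler integration with velocity damping as dissipation, and a naive all-pairs force sum (the paper's smooth-particle mass initialization and Barnes-Hut acceleration are omitted). All function names and parameters are illustrative.

```python
import numpy as np

def rodrigues(w):
    """Rotation matrix from a rotation vector (axis * angle)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def align_rigid_gravity(X, Y, iters=500, dt=0.2, damping=0.85, eps=0.1):
    """Move the rigid source swarm X (N, 3) toward target Y (M, 3) by
    integrating linear and angular velocity in a softened gravitational
    field (semi-implicit Euler; unit mass and identity inertia tensor
    are simplifications of the full rigid-body formulation)."""
    X = X.copy()
    m = np.full(len(Y), 1.0 / len(Y))   # target masses, normalized to sum to 1
    v = np.zeros(3)                     # linear velocity of the swarm
    w = np.zeros(3)                     # angular velocity of the swarm
    for _ in range(iters):
        d = Y[None, :, :] - X[:, None, :]          # (N, M, 3) displacements
        r2 = (d ** 2).sum(-1) + eps                # softened squared distances
        fi = ((m / r2 ** 1.5)[..., None] * d).sum(1)  # net force per source point
        c = X.mean(0)
        F = fi.mean(0)                             # net force per unit swarm mass
        T = np.cross(X - c, fi).mean(0)            # net torque about the centroid
        v = damping * (v + dt * F)                 # damping acts as dissipation
        w = damping * (w + dt * T)
        X = (X - c) @ rodrigues(dt * w).T + c + dt * v
    return X

# Toy usage: the target displaced by a known offset should be pulled back.
rng = np.random.default_rng(0)
Y = 0.3 * rng.normal(size=(60, 3))
X0 = Y + np.array([2.0, 1.0, 0.0])
X1 = align_rigid_gravity(X0, Y)
rmse0 = float(np.sqrt(((X0 - Y) ** 2).sum(1).mean()))
rmse1 = float(np.sqrt(((X1 - Y) ** 2).sum(1).mean()))
```

Without the damping factor the conservative dynamics would oscillate around the target indefinitely; dissipating velocity each step lets the swarm settle into the potential well at the aligned configuration.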
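The Barnes-Hut acceleration mentioned in the abstract replaces the quadratic all-pairs force sum with a tree traversal: distant groups of target points are summarized by their total mass and center of mass, which yields quasilinear complexity. A minimal octree sketch of the idea follows; the identifiers and the opening-angle parameter `theta` are our own illustrative choices, not the paper's code.

```python
import numpy as np

def build_octree(points, masses, center, half):
    """Recursively build an octree; each node stores the total mass and
    center of mass of the points it contains."""
    total = masses.sum()
    node = {"mass": total, "com": masses @ points / total,
            "half": half, "children": []}
    if len(points) == 1:
        return node
    # Assign each point to one of 8 octants relative to the cell center.
    octant = ((points > center) * np.array([1, 2, 4])).sum(axis=1)
    for o in range(8):
        sel = octant == o
        if sel.any():
            off = (np.array([o & 1, (o >> 1) & 1, (o >> 2) & 1]) - 0.5) * half
            node["children"].append(
                build_octree(points[sel], masses[sel], center + off, half / 2))
    return node

def bh_force(node, x, theta=0.5, eps=1e-2):
    """Softened gravitational force at x. A node far away relative to its
    size (Barnes-Hut criterion) is approximated by its aggregate mass at
    its center of mass; otherwise it is opened and its children visited."""
    d = node["com"] - x
    r = np.sqrt((d * d).sum() + eps)
    if not node["children"] or (2 * node["half"]) / r < theta:
        return node["mass"] * d / r ** 3
    return sum(bh_force(c, x, theta, eps) for c in node["children"])

# Usage: compare against the brute-force O(M) sum at one query point.
rng = np.random.default_rng(1)
P = rng.random((200, 3))                     # target points in the unit cube
m = np.ones(200)
root = build_octree(P, m, center=np.full(3, 0.5), half=0.5)
x = np.array([1.5, 0.5, 0.5])
d = P - x
brute = ((m / ((d ** 2).sum(1) + 1e-2) ** 1.5)[:, None] * d).sum(0)
exact = bh_force(root, x, theta=0.0)         # theta=0 never approximates
approx = bh_force(root, x, theta=0.5)
rel_err = float(np.linalg.norm(approx - brute) / np.linalg.norm(brute))
```

With `theta = 0` the traversal descends to the leaves and reproduces the brute-force sum; larger `theta` trades accuracy for fewer visited nodes, which is what drops the cost from quadratic to quasilinear.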