
Record

Released

Conference Paper

Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model

MPG Authors

Johnson, Erik Colin Manasie
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

Habermann, Marc
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

Shimada, Soshi
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

Golyanik, Vladislav
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

Theobalt, Christian
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society;

External Resources
No external resources are available.
Full texts (restricted access)
No full texts are currently released for your IP range.
Supplementary material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Johnson, E. C. M., Habermann, M., Shimada, S., Golyanik, V., & Theobalt, C. (2023). Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 6598-6607). Piscataway, NJ: IEEE. doi:10.1109/CVPRW59228.2023.00701.


Citation link: https://hdl.handle.net/21.11116/0000-000B-9C7F-8
Abstract
Capturing general deforming scenes is crucial for many computer graphics and
vision applications, and it is especially challenging when only a monocular RGB
video of the scene is available. Competing methods assume dense point tracks,
3D templates, or large-scale training datasets, or they capture only
small-scale deformations. In contrast, our method, Ub4D, makes none of these
assumptions while outperforming the previous state of the art in challenging
scenarios. Our technique includes two components that are new in the context of
non-rigid 3D reconstruction: 1) a coordinate-based, implicit neural
representation for non-rigid scenes, which enables an unbiased reconstruction
of dynamic scenes, and 2) a novel dynamic scene flow loss, which enables the
reconstruction of larger deformations. Results on our new dataset, which will
be made publicly available, demonstrate a clear improvement over the state of
the art in terms of surface reconstruction accuracy and robustness to large
deformations. Visit the project page at https://4dqv.mpi-inf.mpg.de/Ub4D/.