  Instant Multi-View Head Capture through Learnable Registration

Bolkart, T., Li, T., & Black, M. J. (2023). Instant Multi-View Head Capture through Learnable Registration. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 768-779). New York, NY: IEEE. doi:10.1109/CVPR52729.2023.00081.

Genre: Conference Paper

Locators

Description:
-
OA-Status:
Green
Locator:
https://doi.org/10.1109/CVPR52729.2023.00081 (Publisher version)
Description:
-
OA-Status:
Closed Access

Creators

Creators:
Bolkart, Timo (1), Author
Li, Tianye (2), Author
Black, Michael J. (1), Author
Affiliations:
(1) Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society, ou_1497642
(2) External Organizations, ou_persistent22

Content

Free keywords: Abt. Black
Abstract: Existing methods for capturing datasets of 3D heads in dense semantic correspondence are slow, and commonly address the problem in two separate steps: multi-view stereo (MVS) reconstruction followed by non-rigid registration. To simplify this process, we introduce TEMPEH (Towards Estimation of 3D Meshes from Performances of Expressive Heads) to directly infer 3D heads in dense correspondence from calibrated multi-view images. Registering datasets of 3D scans typically requires manual parameter tuning to find the right balance between accurately fitting the scans' surfaces and being robust to scanning noise and outliers. Instead, we propose to jointly register a 3D head dataset while training TEMPEH. Specifically, during training we minimize a geometric loss commonly used for surface registration, effectively leveraging TEMPEH as a regularizer. Our multi-view head inference builds on a volumetric feature representation that samples and fuses features from each view using camera calibration information. To account for partial occlusions and a large capture volume that enables head movements, we use view- and surface-aware feature fusion, and a spatial transformer-based head localization module, respectively. We use raw MVS scans as supervision during training, but, once trained, TEMPEH directly predicts 3D heads in dense correspondence without requiring scans. Predicting one head takes about 0.3 seconds with a median reconstruction error of 0.26 mm, 64% lower than the current state-of-the-art. This enables the efficient capture of large datasets containing multiple people and diverse facial motions. Code, model, and data are publicly available at https://tempeh.is.tue.mpg.de.
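The abstract's core mechanism — sampling and fusing per-view image features at 3D locations using camera calibration — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the function names, the pinhole projection setup, the nearest-neighbour sampling, and the simple mean fusion are all assumptions standing in for TEMPEH's learned view- and surface-aware fusion over a volumetric feature grid.

```python
# Hedged sketch (not TEMPEH's actual code) of multi-view feature fusion:
# project 3D points into each calibrated view, sample per-view features
# there, and fuse them across views.
import numpy as np

def project(points, K, Rt):
    """Project N x 3 world points to pixels with a pinhole camera (K: 3x3, Rt: 3x4)."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])   # N x 4
    cam = (Rt @ homog.T).T                                       # N x 3 camera coords
    pix = (K @ cam.T).T                                          # N x 3 homogeneous pixels
    return pix[:, :2] / pix[:, 2:3]                              # N x 2 pixel coords

def sample_features(feat_map, pix):
    """Nearest-neighbour sampling of an H x W x C feature map at pixel locations."""
    h, w, _ = feat_map.shape
    x = np.clip(np.round(pix[:, 0]).astype(int), 0, w - 1)
    y = np.clip(np.round(pix[:, 1]).astype(int), 0, h - 1)
    return feat_map[y, x]                                        # N x C

def fuse_views(points, feat_maps, Ks, Rts):
    """Fuse per-view features at each 3D point; a plain mean stands in for
    the learned view-aware fusion described in the abstract."""
    per_view = [sample_features(f, project(points, K, Rt))
                for f, K, Rt in zip(feat_maps, Ks, Rts)]
    return np.mean(per_view, axis=0)                             # N x C

# Toy usage: two cameras at the origin looking down +z, constant feature maps.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.zeros((3, 1))])
points = np.array([[0.0, 0.0, 1.0], [0.1, 0.1, 1.0]])
fused = fuse_views(points,
                   [np.ones((64, 64, 4)), np.zeros((64, 64, 4))],
                   [K, K], [Rt, Rt])
# fused averages an all-ones and an all-zeros feature map, so every entry is 0.5
```

In the paper this sampling happens for every cell of a 3D feature grid, and the fusion weights are learned rather than uniform, so occluded or grazing views contribute less.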

Details

Language(s): eng - English
Dates: 2023-08
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: BibTex Citekey: TEMPEH
DOI: 10.1109/CVPR52729.2023.00081
arXiv: 2306.07437
 Degree: -

Event

Title: Conference on Computer Vision and Pattern Recognition (CVPR 2023)
Place of Event: Vancouver
Start-/End Date: 2023-06-17 - 2023-06-24

Source 1

Title: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Source Genre: Proceedings
Publ. Info: New York, NY : IEEE
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 768 - 779
Identifier: ISBN: 979-8-3503-0129-8
ISBN: 979-8-3503-0130-4
DOI: 10.1109/CVPR52729.2023

Source 2

Title: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Source Genre: Series
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: -
Identifier: ISSN: 2575-7075
ISSN: 1063-6919