
Paper

Learning to Dress 3D People in Generative Clothing

MPS-Authors

Pons-Moll,  Gerard
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

Citation

Ma, Q., Yang, J., Ranjan, A., Pujades, S., Pons-Moll, G., Tang, S., et al. (2019). Learning to Dress 3D People in Generative Clothing. Retrieved from http://arxiv.org/abs/1907.13615.


Cite as: https://hdl.handle.net/21.11116/0000-0005-749D-8
Abstract
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shape. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term on SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses.
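
To make the "clothing as an additional term on SMPL" idea concrete, here is a minimal PyTorch sketch of sampling pose- and type-conditioned per-vertex clothing displacements and adding them to a body template. The decoder architecture, latent size, and conditioning format are illustrative assumptions, not the authors' released CAPE implementation; a real Mesh-VAE-GAN would use mesh convolutions and a patchwise discriminator rather than the plain MLP shown here.

    # Hypothetical sketch: clothing as an additive displacement field on SMPL.
    # All sizes and the decoder design are assumptions for illustration only.
    import torch
    import torch.nn as nn

    N_VERTS = 6890    # SMPL template vertex count
    Z_DIM = 64        # assumed latent dimension
    POSE_DIM = 72     # SMPL pose parameters (24 joints x 3 axis-angle)
    N_TYPES = 4       # assumed number of clothing types

    class ClothingDecoder(nn.Module):
        """Maps a latent code plus (pose, clothing-type) conditions to
        per-vertex displacements on the SMPL template."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(Z_DIM + POSE_DIM + N_TYPES, 512),
                nn.ReLU(),
                nn.Linear(512, N_VERTS * 3),
            )

        def forward(self, z, pose, clothing_onehot):
            h = torch.cat([z, pose, clothing_onehot], dim=-1)
            return self.net(h).view(-1, N_VERTS, 3)  # displacement field

    def dress_body(template_verts, decoder, pose, clothing_onehot):
        """Sample a clothing layer and add it to the unclothed template;
        the clothed mesh would then be posed with standard SMPL skinning."""
        z = torch.randn(pose.shape[0], Z_DIM)       # draw from the prior
        displacement = decoder(z, pose, clothing_onehot)
        return template_verts + displacement        # additive clothing term

    # Usage: dress one body in a random sample of clothing type 0.
    decoder = ClothingDecoder()
    body = torch.zeros(1, N_VERTS, 3)               # stand-in for SMPL T-pose vertices
    pose = torch.zeros(1, POSE_DIM)
    ctype = torch.eye(N_TYPES)[0].unsqueeze(0)
    clothed = dress_body(body, decoder, pose, ctype)
    print(clothed.shape)                            # torch.Size([1, 6890, 3])

Because the latent code is drawn from the prior while pose and clothing type are held fixed, repeated calls yield different clothing samples for the same body, which is the sampling ability the abstract describes.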