Item Details


Released

Report

GazeDirector: Fully Articulated Eye Gaze Redirection in Video

MPS-Authors

Bulling, Andreas
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society

Fulltext (public)

arXiv:1704.08763.pdf
(Preprint), 5MB

Citation

Wood, E., Baltrusaitis, T., Morency, L.-P., Robinson, P., & Bulling, A. (2017). GazeDirector: Fully Articulated Eye Gaze Redirection in Video. Retrieved from http://arxiv.org/abs/1704.08763.


Cite as: https://hdl.handle.net/11858/00-001M-0000-002D-8B59-2
Abstract
We present GazeDirector, a new approach for eye gaze redirection that uses model-fitting. Our method first tracks the eyes by fitting a multi-part eye region model to video frames using analysis-by-synthesis, thereby recovering eye region shape, texture, pose, and gaze simultaneously. It then redirects gaze by 1) warping the eyelids from the original image using a model-derived flow field, and 2) rendering and compositing synthesized 3D eyeballs onto the output image in a photorealistic manner. GazeDirector allows us to change where people are looking without person-specific training data, and with full articulation, i.e. we can precisely specify new gaze directions in 3D. Quantitatively, we evaluate both model-fitting and gaze synthesis, with experiments for gaze estimation and redirection on the Columbia gaze dataset. Qualitatively, we compare GazeDirector against recent work on gaze redirection, showing better results especially for large redirection angles. Finally, we demonstrate gaze redirection on YouTube videos by introducing new 3D gaze targets and by manipulating visual behavior.
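
To make the two-step synthesis step concrete, below is a minimal NumPy-only sketch of the redirection stage described in the abstract: warp the eyelid region with a dense flow field, then alpha-composite a rendered eyeball layer onto the result. The function names (warp_with_flow, composite, redirect_gaze), the nearest-neighbour sampling, and the dummy inputs are illustrative assumptions, not the paper's implementation; the analysis-by-synthesis model fitting that would produce the flow field and the eyeball render is out of scope here.

import numpy as np

def warp_with_flow(image, flow):
    # Warp an H x W x 3 image with a dense backward flow field.
    # flow[y, x] = (dx, dy) gives, for each output pixel, the offset
    # into the source image to sample from (nearest-neighbour for brevity).
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]

def composite(background, foreground, alpha):
    # Alpha-composite a rendered layer (e.g. a synthesized eyeball)
    # onto the warped frame; alpha is H x W in [0, 1].
    a = alpha[..., None]
    return (a * foreground + (1.0 - a) * background).astype(background.dtype)

def redirect_gaze(frame, eyelid_flow, eyeball_render, eyeball_alpha):
    # Two-step redirection as in the abstract: 1) warp the eyelids
    # using a model-derived flow field, 2) composite the rendered eyeball.
    warped = warp_with_flow(frame, eyelid_flow)
    return composite(warped, eyeball_render, eyeball_alpha)

if __name__ == "__main__":
    # Dummy data: identity flow and an empty matte leave the frame unchanged.
    frame = np.zeros((64, 64, 3), dtype=np.float32)
    flow = np.zeros((64, 64, 2), dtype=np.float32)
    render = np.ones((64, 64, 3), dtype=np.float32)
    alpha = np.zeros((64, 64), dtype=np.float32)
    out = redirect_gaze(frame, flow, render, alpha)
    print(out.shape)  # (64, 64, 3)

In the actual method, the flow field and the eyeball layer come from the fitted multi-part eye region model, which is what allows gaze to be specified precisely in 3D rather than copied from example images.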