
Item


Released

Journal Article

Linear object classes and image synthesis from a single example image

MPS-Authors
/persons/resource/persons84280

Vetter, T.
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Vetter, T., & Poggio, T. (1997). Linear object classes and image synthesis from a single example image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 733-742. doi:10.1109/34.598230.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-EA04-B
Abstract
The need to generate new views of a 3D object from a single real image arises in several fields, including graphics and object recognition. While the traditional approach relies on the use of 3D models, we have recently introduced [1], [2], [3] simpler techniques that are applicable under restricted conditions. The approach exploits image transformations that are specific to the relevant object class, and learnable from example views of other "prototypical" objects of the same class. In this paper, we introduce such a technique by extending the notion of linear class proposed by Poggio and Vetter. For linear object classes, it is shown that linear transformations can be learned exactly from a basis set of 2D prototypical views. We demonstrate the approach on artificial objects and then show preliminary evidence that the technique can effectively "rotate" high-resolution face images from a single 2D view.
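
A rough illustrative sketch of the linear-class idea summarized in the abstract (not the authors' implementation): the novel object's image in the reference view is approximated as a linear combination of prototype images in that view, and the same coefficients are then applied to the prototypes' images in the target view to synthesize the new view. The function name `synthesize_view` and the array layout are assumptions made for this example.

```python
import numpy as np

def synthesize_view(protos_ref, protos_target, novel_ref):
    """Sketch of the linear object class idea (assumed interface).

    protos_ref    : (n_prototypes, d) prototype images (or 2D feature
                    vectors) in the reference view
    protos_target : (n_prototypes, d) the same prototypes in the target view
    novel_ref     : (d,) single example image of the novel object
                    in the reference view
    """
    # Fit coefficients a so that novel_ref ~= sum_i a_i * protos_ref[i]
    # (least-squares fit; exact for a true linear object class).
    coeffs, *_ = np.linalg.lstsq(protos_ref.T, novel_ref, rcond=None)

    # Apply the same coefficients to the prototypes in the target view
    # to predict the novel object's appearance in that view.
    return protos_target.T @ coeffs
```

For a true linear object class the least-squares fit is exact, so the synthesized target view is exact as well; for real images (e.g. faces) the fit is approximate and the quality of the synthesized view depends on how well the prototypes span the class.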