
Released

Paper

Learning GAN Fingerprints towards Image Attribution

MPS-Authors

Yu, Ning
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)

1811.08180.pdf
(Preprint), 7MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Yu, N., Davis, L., & Fritz, M. (2019). Learning GAN Fingerprints towards Image Attribution. Retrieved from http://arxiv.org/abs/1811.08180.


Cite as: http://hdl.handle.net/21.11116/0000-0002-95F8-E
Abstract
Recent advances in Generative Adversarial Networks (GANs) have shown increasing success in generating photorealistic images, but they also raise challenges for visual forensics and model authentication. We present the first study of learning GAN fingerprints for image attribution: we systematically investigate the performance of classifying an image as real or GAN-generated and, for GAN-generated images, of further identifying their source model. Our experiments validate that GANs carry distinct model fingerprints and leave stable fingerprints in their generated images, which supports image attribution. Even a single difference in GAN training initialization can result in different fingerprints, which enables fine-grained model authentication. We further validate that such fingerprints are omnipresent in different image components and are not biased by GAN artifacts. Fingerprint finetuning is effective in immunizing against five types of adversarial image perturbations. Comparisons also show that our learned fingerprints consistently outperform several baselines in a variety of setups.
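The attribution task described in the abstract can be illustrated with a toy NumPy sketch. This is an assumption for illustration only, not the paper's method: the paper learns fingerprints and the classifier end-to-end with a CNN, whereas here each simulated "GAN" simply stamps a fixed residual pattern (its fingerprint) onto random images, and a test image is attributed to the source whose estimated fingerprint it correlates with most strongly. All names (`gan_a`, `generate`, `attribute`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8
SOURCES = ("gan_a", "gan_b")

# Hypothetical ground truth: each model carries a distinct fixed residual
# pattern, standing in for the model fingerprints the paper learns.
true_fingerprints = {s: rng.normal(0.0, 0.5, (H, W)) for s in SOURCES}

def generate(source, n):
    """Simulate n images from `source`: random content plus its fingerprint."""
    return rng.normal(0.0, 1.0, (n, H, W)) + true_fingerprints[source]

# Estimate each model's fingerprint as the mean over its samples:
# the zero-mean content averages out, leaving the stamped pattern.
estimated = {s: generate(s, 200).mean(axis=0) for s in SOURCES}

def attribute(image):
    """Attribute an image to the source whose estimated fingerprint
    gives the highest correlation score."""
    scores = {s: float((image * f).sum()) for s, f in estimated.items()}
    return max(scores, key=scores.get)
```

On this toy data the simple correlation rule attributes most test images correctly; the point of the sketch is only that stable per-model residuals make source identification a tractable classification problem, which is the premise the paper investigates with learned fingerprints.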