
Released

Paper

GAN-Leaks: A Taxonomy of Membership Inference Attacks against GANs

MPS-Authors

Yu, Ning
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society;

External Resource
No external resources are shared
Fulltext (public)

arXiv:1909.03935.pdf
(Preprint), 7MB

Supplementary Material (public)
There is no public supplementary material available
Citation

Chen, D., Yu, N., Zhang, Y., & Fritz, M. (2019). GAN-Leaks: A Taxonomy of Membership Inference Attacks against GANs. Retrieved from http://arxiv.org/abs/1909.03935.


Cite as: http://hdl.handle.net/21.11116/0000-0005-7489-E
Abstract
In recent years, the success of deep learning has carried over from discriminative models to generative models. In particular, generative adversarial networks (GANs) have enabled a new level of performance in applications ranging from media manipulation to dataset re-generation. Despite this success, the potential privacy risks stemming from GANs are less well explored. In this paper, we focus on membership inference attacks against GANs, which have the potential to reveal information about a victim model's training data. Specifically, we present the first taxonomy of membership inference attacks, which encompasses not only existing attacks but also our novel ones. We also propose the first generic attack model that can be instantiated in various settings according to the adversary's knowledge about the victim model. We complement our systematic analysis of attack vectors with a comprehensive experimental study that investigates the effectiveness of these attacks with respect to model type, training configuration, and attack type across three diverse application scenarios, ranging from images over medical data to location data. We show consistent effectiveness in all setups, which bridges the assumption gap and performance gap in previous studies with a complete spectrum of performance across settings. We conclude by reminding users to think carefully before publicizing any part of their models.
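
The generic attack model mentioned in the abstract is only described at a high level on this page. As a purely illustrative aid, the sketch below shows one common full black-box membership inference heuristic against a generative model: draw samples from the victim generator and score a query point by its distance to the nearest generated sample, treating a small distance as evidence of membership. The names dummy_generator, membership_score, and the distance metric and threshold are hypothetical placeholders for this sketch, not the paper's implementation.

import numpy as np

# Stand-in for a victim GAN's generator (hypothetical): returns n fake points.
def dummy_generator(n_samples, dim=2):
    rng = np.random.default_rng(0)
    return rng.normal(size=(n_samples, dim))

# Membership score: negative distance to the closest generated sample.
# A higher score means the query is better covered by the generator, which a
# black-box attacker may take as evidence it was in the training set.
def membership_score(query, generated):
    dists = np.linalg.norm(generated - query, axis=1)
    return -float(dists.min())

# Decide membership by thresholding the score (threshold chosen arbitrarily here).
def infer_membership(query, n_samples=10000, threshold=-0.05):
    generated = dummy_generator(n_samples, dim=query.shape[0])
    return membership_score(query, generated) > threshold

if __name__ == "__main__":
    candidate = np.array([0.1, -0.2])  # point suspected to be in the training set
    print("member?", infer_membership(candidate))

In practice the threshold would be calibrated on held-out data, and the attacker's knowledge (full model, generator only, or samples only) determines which variant of such an attack is feasible.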