Keywords:
Computer Science - Machine Learning (cs.LG); Computer Science - Cryptography and Security (cs.CR); Computer Science - Computer Vision and Pattern Recognition (cs.CV)
Abstract:
In recent years, the success of deep learning has carried over from
discriminative models to generative models. In particular, generative
adversarial networks (GANs) have facilitated a new level of performance ranging
from media manipulation to dataset re-generation. Despite the success, the
potential risks of privacy breach stemming from GANs are less well explored. In
this paper, we focus on membership inference attacks against GANs, which have
the potential to reveal information about the victim model's training data.
Specifically, we present the first taxonomy of membership inference attacks,
which encompasses not only existing attacks but also our novel ones. We also
propose the first generic attack model that can be instantiated in various
settings according to the adversary's knowledge of the victim model. We
complement our systematic analysis of attack vectors with a comprehensive
experimental study that investigates the effectiveness of these attacks w.r.t.
model type, training configuration, and attack type across three diverse
application scenarios spanning image, medical, and location data.
We demonstrate consistent effectiveness across all setups, bridging the
assumption and performance gaps left by previous studies with a complete
spectrum of results across settings. We conclude by cautioning model owners to
carefully weigh these risks before publishing any part of their models.
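The generic attack model mentioned above is described here only abstractly. As a rough illustration of how a membership inference attack on a generator can look in the most restrictive (full black-box) setting, the following is a minimal sketch, not the paper's actual method: it assumes toy Gaussian data in place of real generator output, and the helper names (`membership_score`, `infer_member`) are invented for this example. The intuition is that a candidate record lying unusually close to the generator's output distribution is more likely to have been a training member.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for samples drawn from a victim GAN's generator;
# in the black-box setting the adversary can only query for such samples.
gen_samples = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))

def membership_score(x, samples):
    """Score a candidate record: the smaller its distance to the nearest
    generated sample, the more member-like it is (higher score)."""
    dists = np.linalg.norm(samples - x, axis=1)
    return -dists.min()

def infer_member(x, samples, threshold):
    """Declare membership when the score clears a calibrated threshold."""
    return membership_score(x, samples) >= threshold

# A candidate near the generator's distribution vs. a clear outlier.
near = np.zeros(8)
far = np.full(8, 10.0)
print(membership_score(near, gen_samples) > membership_score(far, gen_samples))
```

In practice the threshold would be calibrated on reference data, and stronger (partial or white-box) adversaries can replace the sampled distance with reconstruction via the generator's latent space; the sketch only conveys the shared scoring-and-thresholding structure.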