
Item Details

  GAN-Leaks: A Taxonomy of Membership Inference Attacks against GANs

Chen, D., Yu, N., Zhang, Y., & Fritz, M. (2019). GAN-Leaks: A Taxonomy of Membership Inference Attacks against GANs. Retrieved from http://arxiv.org/abs/1909.03935.

Basic Information

Item Permalink: https://hdl.handle.net/21.11116/0000-0005-7489-E
Version Permalink: https://hdl.handle.net/21.11116/0000-0005-748A-D
Resource Type: Paper

Files

arXiv:1909.03935.pdf (Preprint), 7MB
 
File Permalink: -
File Name: arXiv:1909.03935.pdf
Description: File downloaded from arXiv at 2020-01-10 08:47
OA-Status: -
Visibility: Private
MIME Type / Checksum: application/pdf
Technical Metadata: -
Copyright Date: -
Copyright Info: -

Creators

Creators:
Chen, Dingfan¹, Author
Yu, Ning², Author
Zhang, Yang¹, Author
Fritz, Mario¹, Author
Affiliations:
¹ External Organizations, ou_persistent22
² Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547

Content Description

Keywords: Computer Science, Learning, cs.LG; Computer Science, Cryptography and Security, cs.CR; Computer Science, Computer Vision and Pattern Recognition, cs.CV
Abstract: In recent years, the success of deep learning has carried over from discriminative models to generative models. In particular, generative adversarial networks (GANs) have enabled a new level of performance in applications ranging from media manipulation to dataset re-generation. Despite this success, the potential privacy risks stemming from GANs are less well explored. In this paper, we focus on membership inference attacks against GANs, which have the potential to reveal information about the victim model's training data. Specifically, we present the first taxonomy of membership inference attacks, which encompasses not only existing attacks but also our novel ones. We also propose the first generic attack model that can be instantiated in various settings according to the adversary's knowledge about the victim model. We complement our systematic analysis of attack vectors with a comprehensive experimental study that investigates the effectiveness of these attacks with respect to model type, training configuration, and attack type across three diverse application scenarios, ranging from images over medical data to location data. We show consistent effectiveness across all setups, bridging the assumption and performance gaps of previous studies with a complete spectrum of results across settings. We conclude by reminding users to think carefully before publicizing any part of their models.
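
To make the setting concrete, below is a minimal, illustrative Python sketch (not the paper's implementation) of a black-box, distance-based membership inference test against a generator: a query sample is guessed to be a training member if it lies unusually close to the generator's output distribution. The function names, the stand-in generator, the latent dimension, and the threshold tau are assumptions made purely for illustration.

    # Illustrative sketch only (not the paper's code): a black-box,
    # distance-based membership inference test against a GAN generator.
    # Intuition: training members tend to lie closer to the generator's
    # output distribution than samples the GAN never saw.
    import numpy as np

    def reconstruction_distance(x, generated, k=1):
        """Mean distance from query x to its k nearest generated samples."""
        dists = np.linalg.norm(generated - x, axis=1)  # Euclidean distance to each sample
        return np.sort(dists)[:k].mean()

    def infer_membership(x, generator, latent_dim, n_samples=10_000, tau=1.0):
        """Guess 'member' if x is unusually well covered by the generator.
        The latent dimension and the threshold tau are assumed, not given."""
        z = np.random.randn(n_samples, latent_dim)     # sample latent codes
        generated = generator(z)                       # black-box generator access only
        return reconstruction_distance(x, generated) < tau

    # Toy end-to-end run with a stand-in linear "generator".
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 32)) * 0.1
    toy_generator = lambda z: z @ W
    x_query = rng.standard_normal(32)
    print(infer_membership(x_query, toy_generator, latent_dim=64))

In the richer access settings the taxonomy covers, the random sampling step would typically give way to an explicit optimization over the latent code to reconstruct the query, which tightens the distance estimate; the threshold would likewise be calibrated on reference data rather than fixed by hand.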

Details

Language: eng - English
Date: 2019-09-09, 2019
Publication Status: Published online
Pages: 22 p.
Publishing Info: -
Table of Contents: -
Review: -
Identifiers (DOI, ISBN, etc.): arXiv: 1909.03935
URI: http://arxiv.org/abs/1909.03935
BibTeX Citekey: Chen_arXIv1909.03935
Degree: -
