
Released

Conference Paper

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

Full Texts (freely accessible)

arXiv:1806.01246.pdf
(Preprint), 706KB

ndss2019_03A-1_Salem_paper.pdf
(Publisher version), 581KB

Citation

Salem, A., Zhang, Y., Humbert, M., Fritz, M., & Backes, M. (2019). ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models. In Network and Distributed System Security Symposium 2019. Reston, VA: Internet Society. doi:10.14722/ndss.2019.23119.


Citation link: https://hdl.handle.net/21.11116/0000-0002-5B4C-4

Abstract
Machine learning (ML) has become a core component of many real-world applications, and training data is a key factor that drives current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack showed that extracting information about the training set is possible in such MLaaS settings, which has severe security and privacy implications.

However, the early demonstrations of the feasibility of such attacks place many assumptions on the adversary, such as using multiple so-called shadow models, knowing the target model's structure, and having a dataset from the same distribution as the target model's training data. We relax all three key assumptions, thereby showing that such attacks are very broadly applicable at low cost and consequently pose a more severe risk than previously thought. We present the most comprehensive study so far on this emerging and developing threat, using eight diverse datasets that demonstrate the viability of the proposed attacks across domains.

In addition, we propose the first effective defense mechanisms against this broader class of membership inference attacks, mechanisms that maintain a high level of utility of the ML model.
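
To make the intuition concrete, the following is a minimal sketch of the simplest model- and data-independent flavor of such an attack: treating a high maximum posterior from the target model as evidence of training-set membership. This is an illustration, not the paper's implementation; the helper name and the fixed 0.9 threshold are assumptions, and a real attacker would calibrate the threshold (e.g. on public non-member data).

    import numpy as np

    def infer_membership_by_confidence(posteriors, threshold=0.9):
        # posteriors: (n_samples, n_classes) softmax outputs of the
        # target model for the queried records.
        # Overfit models tend to be more confident on records they were
        # trained on, so a high maximum posterior is taken as evidence
        # of membership. The 0.9 threshold is an illustrative assumption.
        max_confidence = np.max(posteriors, axis=1)
        return max_confidence >= threshold

    # Toy usage: two confident predictions and one uncertain one.
    outputs = np.array([
        [0.97, 0.02, 0.01],  # very confident -> flagged as member
        [0.40, 0.35, 0.25],  # uncertain      -> flagged as non-member
        [0.92, 0.05, 0.03],  # confident      -> flagged as member
    ])
    print(infer_membership_by_confidence(outputs))  # [ True False  True]

Intuitively, any defense must shrink this confidence gap between members and non-members without sacrificing accuracy, which is exactly the trade-off the utility-preserving mechanisms in the abstract target.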