  Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks

Orekondy, T., Schiele, B., & Fritz, M. (2019). Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks. Retrieved from http://arxiv.org/abs/1906.10908.


Basic Data

Genre: Research paper

Files

arXiv:1906.10908.pdf (Preprint), 6 MB
File permalink:
-
Name:
arXiv:1906.10908.pdf
Description:
File downloaded from arXiv at 2019-07-03 11:46
OA status:
-
Visibility:
Private
MIME type / checksum:
application/pdf
Technical metadata:
-
Copyright date:
-
Copyright info:
-

External References

-

Creators

Creators:
Orekondy, Tribhuvanesh (1), Author
Schiele, Bernt (1), Author
Fritz, Mario (2), Author
Affiliations:
(1) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
(2) External Organizations, ou_persistent22

Content

Keywords: Computer Science, Learning, cs.LG; Computer Science, Cryptography and Security, cs.CR; Computer Science, Computer Vision and Pattern Recognition, cs.CV; Statistics, Machine Learning, stat.ML
Abstract: With the advances in ML models in recent years, an increasing number of real-world commercial applications and services (e.g., autonomous vehicles, medical equipment, web APIs) are emerging. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such ML applications, which require significant time, money, and effort to develop. In this paper, we address the issue by studying defenses against model stealing attacks, largely motivated by the lack of effective defenses in the literature. We work towards the first defense that introduces targeted perturbations to the model predictions under a utility constraint. Our approach introduces perturbations targeted at manipulating the training procedure of the attacker. We evaluate our approach on multiple datasets and attack scenarios across a range of utility constraints. Our results show that it is indeed possible to trade off utility (e.g., deviation from the original prediction, test accuracy) to significantly reduce the effectiveness of model stealing attacks.
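
The abstract only sketches the defense (perturb each returned prediction within a utility budget so that the attacker's surrogate trains poorly). The snippet below is a minimal illustrative sketch, not the paper's actual optimization: it assumes the victim returns softmax posteriors and simply flattens each one toward the uniform distribution under an L1 utility budget while preserving the top-1 class, whereas the paper chooses perturbations targeted at the attacker's training procedure.

# Illustrative sketch of a utility-constrained prediction perturbation.
# Assumption: the defended model returns softmax posteriors. Flattening
# toward uniform is a stand-in objective, not the paper's defense.
import numpy as np

def perturb_posterior(y, eps=0.5):
    """Return a poisoned posterior y' with ||y' - y||_1 <= eps that still
    sums to 1 and keeps the original argmax (top-1 accuracy preserved)."""
    y = np.asarray(y, dtype=float)
    uniform = np.full_like(y, 1.0 / y.size)
    direction = uniform - y          # sums to 0, so y' stays a distribution
    l1 = np.abs(direction).sum()
    if l1 == 0.0:                    # already uniform; nothing to do
        return y
    # y' = (1 - t) * y + t * uniform preserves the class ranking for t < 1,
    # so t only needs capping by the utility budget (and just below 1).
    t = min(0.999, eps / l1)
    return y + t * direction

# Example: a confident 3-class posterior under an L1 budget of 0.3.
y = np.array([0.70, 0.20, 0.10])
print(perturb_posterior(y, eps=0.3))  # -> [0.55, 0.2545..., 0.1954...]

The design point carried over from the abstract is the utility constraint: the defender bounds the deviation from the original prediction so legitimate users see near-identical outputs, while an attacker distilling a surrogate model from the poisoned posteriors receives a much weaker training signal.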

Details

Language(s): eng - English
Date: 2019-06-26
Publication status: Published online
Pages: 13 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 1906.10908
URI: http://arxiv.org/abs/1906.10908
BibTeX citekey: Orekondy_arXiv1906.10908
Type of degree: -

Event

-

Decision

-

Project Information

-

Source

-