  Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks

Orekondy, T., Schiele, B., & Fritz, M. (2019). Prediction Poisoning: Utility-Constrained Defenses Against Model Stealing Attacks. Retrieved from http://arxiv.org/abs/1906.10908.


Files

arXiv:1906.10908.pdf (Preprint), 6MB
 
File Permalink: -
Name: arXiv:1906.10908.pdf
Description: File downloaded from arXiv at 2019-07-03 11:46
OA-Status: -
Visibility: Private
MIME-Type / Checksum: application/pdf
Technical Metadata: -
Copyright Date: -
Copyright Info: -

Creators

Creators:
Orekondy, Tribhuvanesh (1), Author
Schiele, Bernt (1), Author
Fritz, Mario (2), Author
Affiliations:
(1) Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society, ou_1116547
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Learning, cs.LG; Computer Science, Cryptography and Security, cs.CR; Computer Science, Computer Vision and Pattern Recognition, cs.CV; Statistics, Machine Learning, stat.ML
Abstract: With the advances of ML models in recent years, we are seeing an increasing number of real-world commercial applications and services (e.g., autonomous vehicles, medical equipment, web APIs) emerge. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such ML applications, which require a lot of time, money, and effort to develop. In this paper, we address the issue by studying defenses against model stealing attacks, largely motivated by the lack of effective defenses in the literature. We work towards the first defense which introduces targeted perturbations to the model predictions under a utility constraint. Our approach introduces perturbations targeted towards manipulating the training procedure of the attacker. We evaluate our approach on multiple datasets and attack scenarios across a range of utility constraints. Our results show that it is indeed possible to trade off utility (e.g., deviation from the original prediction, test accuracy) to significantly reduce the effectiveness of model stealing attacks.
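
To make the abstract's idea concrete, below is a minimal Python sketch of a utility-constrained prediction perturbation, assuming a simple constraint (the top-1 label stays unchanged and the L1 deviation stays within a bound eps) and a random poison direction. The function perturb_prediction, the eps parameter, and the random direction are illustrative assumptions only; the paper's actual perturbation objective targets the attacker's training procedure and differs from this sketch.

import numpy as np

def perturb_prediction(y, eps=0.5, rng=None):
    """Return a poisoned posterior y_p such that
    (1) y_p is still a valid probability vector,
    (2) the top-1 class of y is preserved (utility constraint),
    (3) ||y_p - y||_1 <= eps (bounded deviation).
    The poison direction here is random; a real defense would choose it
    to maximally damage the attacker's training signal."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y, dtype=float)
    target = rng.dirichlet(np.ones_like(y))  # hypothetical poison direction
    # Move y towards the poison direction, halving the step size until
    # both constraints are satisfied.
    alpha = 1.0
    while alpha > 1e-6:
        y_p = (1 - alpha) * y + alpha * target
        if np.argmax(y_p) == np.argmax(y) and np.abs(y_p - y).sum() <= eps:
            return y_p
        alpha *= 0.5
    return y  # fall back to the unperturbed prediction

# Usage: wrap the victim model's prediction API so that black-box queries
# only ever see the perturbed outputs.
y_true = np.array([0.70, 0.20, 0.10])
print(perturb_prediction(y_true, eps=0.5))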

Details

Language(s): eng - English
Dates: 2019-06-26, 2019
 Publication Status: Published online
 Pages: 13 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1906.10908
URI: http://arxiv.org/abs/1906.10908
BibTex Citekey: Orekondy_arXiv1906.10908
 Degree: -
