Paper

Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation

MPS-Authors

He, Yang
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society


Schiele, Bernt
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society

External Resource
No external resources are shared
Fulltext (public)

arXiv:1912.09685.pdf
(Preprint), 9 MB

Supplementary Material (public)
There is no public supplementary material available
Citation

He, Y., Rahimian, S., Schiele, B., & Fritz, M. (2019). Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation. Retrieved from http://arxiv.org/abs/1912.09685.


Cite as: http://hdl.handle.net/21.11116/0000-0005-73E3-9
Abstract
Today's success of state-of-the-art methods for semantic segmentation is driven by large datasets. Data is considered an important asset that needs to be protected, as the collection and annotation of such datasets come at significant effort and cost. In addition, visual data might contain private or sensitive information, which makes it equally unsuited for public release. Unfortunately, recent work on membership inference in the broader area of adversarial machine learning and inference attacks on machine learning models has shown that even black-box classifiers leak information about the dataset they were trained on. We present the first attacks and defenses for complex, state-of-the-art models for semantic segmentation. In order to mitigate the associated risks, we also study a series of defenses against such membership inference attacks and find effective countermeasures against these risks. Finally, we extensively evaluate our attacks and defenses on a range of relevant real-world datasets: Cityscapes, BDD100K, and Mapillary Vistas.
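The core observation, that even black-box access to a model can reveal whether an example was in its training set, can be illustrated with a simple loss-thresholding baseline. The sketch below is illustrative only and is not the paper's attack: `model`, the tensor shapes, and the threshold calibration are assumptions, here a PyTorch segmentation network that returns per-pixel class logits.

```python
# Minimal sketch of a confidence/loss-based membership inference baseline for
# semantic segmentation. Illustrative only; not the attack from the paper.
# Assumptions: `model` is a PyTorch segmentation network returning per-pixel
# logits of shape (N, C, H, W); `labels` is a LongTensor of shape (N, H, W).

import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_pixel_loss(model, images, labels):
    """Average per-pixel cross-entropy per image. Training-set members
    typically score lower than unseen images, which is what leaks."""
    logits = model(images)                                     # (N, C, H, W)
    loss = F.cross_entropy(logits, labels, reduction="none")   # (N, H, W)
    return loss.flatten(1).mean(dim=1)                         # (N,)

def infer_membership(model, images, labels, threshold):
    """Predict 'member' when the segmentation loss falls below a
    threshold; `threshold` must be calibrated externally."""
    return mean_pixel_loss(model, images, labels) < threshold
```

In the standard membership inference setup, the threshold would be calibrated on shadow models trained on data with known membership, so that the attacker never needs access to the target model's training set.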