Abstract:
Machine learning is transforming the world. Its application areas span privacy-sensitive and security-critical tasks such as human identification and self-driving cars. These applications raise privacy- and security-related questions that are not yet fully understood or answered: Can automatic person recognisers identify people
in photos even when their faces are blurred? How easy is it to find an adversarial
input for a self-driving car that makes it drive off the road?
This thesis contributes one of the first steps towards a better understanding of
such concerns. We observe that many privacy- and security-critical scenarios for learned models involve input data manipulation: users obfuscate their identity by blurring their faces, and adversaries inject imperceptible perturbations into the input signal. We introduce a data manipulator framework as a tool for collectively describing and analysing privacy- and security-relevant scenarios involving learned models. A data manipulator introduces a shift in the data distribution to achieve privacy- or security-related goals, and feeds the transformed input to the target model. This
framework provides a common perspective on the studies presented in the thesis.
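To make the abstraction concrete, the following minimal sketch (illustrative only, not code from the thesis; all names are hypothetical) models a manipulator as a transformation composed with the target model:

    from typing import Callable
    import numpy as np

    # Hypothetical types: a target model maps an input to a prediction,
    # and a data manipulator transforms the input before the model sees it.
    Model = Callable[[np.ndarray], np.ndarray]
    Manipulator = Callable[[np.ndarray], np.ndarray]

    def manipulated_prediction(model: Model,
                               manipulate: Manipulator,
                               x: np.ndarray) -> np.ndarray:
        """Feed the manipulated input to the target model."""
        return model(manipulate(x))

    # One manipulator instance: an additive perturbation, usable both for
    # privacy (obfuscation) and for security (adversarial attack) goals.
    def additive_perturbation(delta: np.ndarray) -> Manipulator:
        return lambda x: x + delta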
We begin the studies from the user's privacy point of view. We analyse the efficacy of common obfuscation methods like face blurring, and show that they are surprisingly ineffective against state-of-the-art person recognition systems. We then propose alternatives based on head inpainting and adversarial examples. In studying user privacy, we also study the dual problem: model security. From the model security perspective, a model ought to be robust and reliable against small amounts of data manipulation. In both cases, data are manipulated with the goal of changing the target model's prediction; user privacy and model security can thus be described with the same objective.
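As one concrete, widely known instance of such a prediction-changing manipulation (a standard technique used here for illustration, not necessarily the method developed in the thesis), the fast gradient sign method perturbs the input one small step in the direction that increases the model's loss. The sketch below assumes a PyTorch classifier:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model: torch.nn.Module,
                     x: torch.Tensor,
                     label: torch.Tensor,
                     epsilon: float = 0.03) -> torch.Tensor:
        """One-step adversarial perturbation (FGSM): nudge the input in
        the direction that maximally increases the classification loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # For small epsilon the change is imperceptible to humans,
        # yet it often flips the target model's prediction.
        return (x + epsilon * x.grad.sign()).detach()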
We then study the knowledge aspect of the data manipulation problem. The more one knows about the target model, the more effective the manipulations one can craft. We propose a game-theoretic manipulation framework to systematically represent the level of knowledge about the target model and derive privacy and security guarantees. We then discuss ways to increase knowledge about a black-box model by merely querying it, deriving implications relevant to both the privacy and security perspectives.
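A minimal sketch of such query-based knowledge extraction (illustrative, assuming the black box returns a scalar confidence score for an input) is a finite-difference gradient estimate built from forward queries alone:

    import numpy as np

    def estimate_gradient(query, x: np.ndarray, h: float = 1e-4) -> np.ndarray:
        """Two-sided finite-difference estimate of the gradient of a scalar
        black-box score, using 2 * x.size forward queries and no access to
        the model's internals."""
        flat = x.reshape(-1).astype(float)
        grad = np.zeros_like(flat)
        for i in range(flat.size):
            e = np.zeros_like(flat)
            e[i] = h
            grad[i] = (query((flat + e).reshape(x.shape))
                       - query((flat - e).reshape(x.shape))) / (2.0 * h)
        return grad.reshape(x.shape)

An estimated gradient of this kind can drive the same manipulations a white-box adversary would craft, which is why query access alone already carries both privacy and security implications.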