
Released

Paper

Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making

MPS-Authors

Gummadi, Krishna P.
Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society

Fulltext (public)

arXiv:1806.04959.pdf
(Preprint), 2MB

Citation

Heidari, H., Ferrari, C., Gummadi, K. P., & Krause, A. (2018). Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making. Retrieved from http://arxiv.org/abs/1806.04959.


Cite as: https://hdl.handle.net/21.11116/0000-0003-4E31-F
Abstract
We draw attention to an important yet largely overlooked aspect of evaluating fairness for automated decision-making systems: risk and welfare considerations. Our proposed family of measures corresponds to long-established formulations of cardinal social welfare in economics, and is justified by the Rawlsian conception of fairness behind a veil of ignorance. Because our welfare-based measures of fairness admit a convex formulation, they can be integrated as constraints into any convex loss-minimization pipeline. Our empirical analysis reveals interesting trade-offs between our proposal and (a) prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of individual fairness. Furthermore, and perhaps most importantly, our work provides both heuristic justification and empirical evidence suggesting that a lower bound on our measures often leads to bounded inequality in algorithmic outcomes, hence presenting the first computationally feasible mechanism for bounding individual-level inequality.
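The intuition behind a cardinal social welfare measure penalizing unequal outcomes can be illustrated with a minimal sketch. This is not the paper's exact formulation; it assumes a simple average-utility welfare function with a standard concave (constant-relative-risk-aversion) utility, so that, by Jensen's inequality, a benefit distribution spread unevenly across individuals yields lower welfare than an equal distribution with the same mean. This is the sense in which a lower bound on welfare constrains inequality.

```python
import math

def crra_utility(b, eta=0.5):
    # Constant-relative-risk-aversion utility; concave for eta > 0.
    # Chosen here purely as an illustrative cardinal utility.
    if eta == 1.0:
        return math.log(b)
    return (b ** (1.0 - eta)) / (1.0 - eta)

def social_welfare(benefits, eta=0.5):
    # Average cardinal social welfare of individual benefits.
    return sum(crra_utility(b, eta) for b in benefits) / len(benefits)

# Two hypothetical benefit allocations with the same mean (0.5):
equal   = [0.5, 0.5, 0.5, 0.5]    # everyone receives the same benefit
unequal = [0.05, 0.15, 0.8, 1.0]  # same total benefit, unevenly spread

w_equal = social_welfare(equal)
w_unequal = social_welfare(unequal)
# Concavity implies w_equal > w_unequal: requiring welfare above a
# threshold between the two values rules out the unequal allocation.
```

In a learning pipeline, the paper's point is that such a welfare measure has a convex formulation and can therefore be added as a constraint to a convex loss-minimization problem; the sketch above only shows why the constraint disfavors unequal outcomes.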