  Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making

Heidari, H., Ferrari, C., Gummadi, K. P., & Krause, A. (2018). Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making. Retrieved from http://arxiv.org/abs/1806.04959.

Basic data

Genre: Research paper

Files

arXiv:1806.04959.pdf (Preprint), 2MB
Name:
arXiv:1806.04959.pdf
Description:
File downloaded from arXiv at 2019-04-03 13:06
OA status:
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright Info:
-

Creators

Creators:
Heidari, Hoda 1, Author
Ferrari, Claudio 1, Author
Gummadi, Krishna P. 2, Author
Krause, Andreas 1, Author
Affiliations:
1 External Organizations, ou_persistent22
2 Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society, ou_2105291

Content

Keywords: Computer Science, Artificial Intelligence, cs.AI
Abstract: We draw attention to an important, yet largely overlooked aspect of
evaluating fairness for automated decision making systems---namely risk and
welfare considerations. Our proposed family of measures corresponds to the
long-established formulations of cardinal social welfare in economics, and is
justified by the Rawlsian conception of fairness behind a veil of ignorance.
The convex formulation of our welfare-based measures of fairness allows us to
integrate them as a constraint into any convex loss minimization pipeline. Our
empirical analysis reveals interesting trade-offs between our proposal and (a)
prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of
individual fairness. Furthermore and perhaps most importantly, our work
provides both heuristic justification and empirical evidence suggesting that a
lower-bound on our measures often leads to bounded inequality in algorithmic
outcomes; hence presenting the first computationally feasible mechanism for
bounding individual-level inequality.
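The abstract's claim that the welfare-based measure can enter any convex loss minimization pipeline as a constraint can be made concrete with a small sketch. The sketch below is an assumption-laden illustration, not the paper's method: the individual "benefit" (taken here to be the classifier margin), the concave utility u(b) = -exp(-b), the threshold tau, and all variable names are hypothetical choices made only so that the welfare lower bound stays a valid convex constraint; cvxpy is used because it makes that convexity explicit.

# Hedged, self-contained sketch (Python, cvxpy). The benefit definition,
# utility function, and threshold are illustrative assumptions and are
# NOT the measures defined in Heidari et al.'s paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))                        # synthetic features
y = np.where(rng.standard_normal(n) > 0, 1.0, -1.0)    # labels in {-1, +1}

w = cp.Variable(d)
margins = cp.multiply(y, X @ w)                        # affine in w

# Standard convex objective: average logistic loss.
loss = cp.sum(cp.logistic(-margins)) / n

# Illustrative individual "benefit": here simply the margin itself.
# Cardinal welfare with concave utility u(b) = -exp(-b); the welfare
# expression is concave in w, so the lower bound below is a valid
# (DCP-compliant) convex constraint.
welfare = cp.sum(-cp.exp(-margins)) / n
tau = -1.0                                             # arbitrary welfare lower bound

problem = cp.Problem(cp.Minimize(loss), [welfare >= tau])
problem.solve()
print("optimal loss:", problem.value)

The structural point is only this: any concave welfare functional of an affine benefit vector can be bounded from below without leaving the convex regime, which is what makes such a constraint compatible with standard training pipelines.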

Details

Language(s): eng - English
Date: 2018-06-13, 2019-01-11, 2018
Publication status: Published online
Pages: 17 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 1806.04959
URI: http://arxiv.org/abs/1806.04959
BibTex Citekey: Heidari_arXiv1806.04959
Degree type: -
