  Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

Zafar, M. B., Valera, I., Gomez Rodriguez, M., & Gummadi, K. P. (2016). Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. Fairness, Accountability, and Transparency in Machine Learning. doi:10.1145/3038912.3052660.

Files

arXiv:1610.08452.pdf (Preprint), 644KB
Name:
arXiv:1610.08452.pdf
Description:
File downloaded from arXiv at 2017-04-12 12:25; to appear in Proceedings of the 26th International World Wide Web Conference (WWW), 2017. Code available at: https://github.com/mbilalzafar/fair-classification
OA-Status:
Visibility:
Public
MIME type / checksum:
application/pdf / [MD5]
Technical metadata:
Copyright date:
-
Copyright info:
-

Creators

Creators:
Zafar, Muhammad Bilal (1), Author
Valera, Isabel (2), Author
Gomez Rodriguez, Manuel (2), Author
Gummadi, Krishna P. (1), Author
Affiliations:
(1) Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society, ou_2105291
(2) Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society, ou_2105290

Content

Keywords: Statistics, Machine Learning, stat.ML, Computer Science, Learning, cs.LG
Abstract: Automated data-driven decision making systems are increasingly being used to assist, or even replace humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or, classifiers), their training involves minimizing the errors (or, misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper, we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.
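
The abstract defines disparate mistreatment in terms of group-wise misclassification rates. The minimal NumPy sketch below measures it as the gaps in false positive and false negative rates between two groups, assuming binary 0/1 labels and a two-valued sensitive attribute; the function names are illustrative only and are not taken from the linked repository.

import numpy as np

def error_rates(y_true, y_pred, mask):
    # False positive and false negative rates within one group (0/1 labels).
    yt, yp = y_true[mask], y_pred[mask]
    fpr = np.mean(yp[yt == 0]) if np.any(yt == 0) else 0.0
    fnr = np.mean(1 - yp[yt == 1]) if np.any(yt == 1) else 0.0
    return fpr, fnr

def disparate_mistreatment(y_true, y_pred, sensitive):
    # Gaps in FPR and FNR between the two sensitive groups (encoded 0 and 1).
    fpr0, fnr0 = error_rates(y_true, y_pred, sensitive == 0)
    fpr1, fnr1 = error_rates(y_true, y_pred, sensitive == 1)
    return abs(fpr0 - fpr1), abs(fnr0 - fnr1)

# Example: equal overall error counts, but all mistakes fall on group 1.
y_true    = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred    = np.array([0, 0, 1, 1, 1, 1, 0, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(disparate_mistreatment(y_true, y_pred, sensitive))  # (1.0, 1.0)

In the paper these gaps are not only measured post hoc; bounds on them are incorporated as convex-concave constraints into the training of decision boundary-based classifiers. The authors' implementation is available at https://github.com/mbilalzafar/fair-classification.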

Details

Language(s): eng - English
Date: 2016-10-26, 2017-03-08, 2016
Publication status: Published online
Pages: 10 p.
Place, publisher, edition: -
Table of contents: -
Type of review: -
Identifiers: arXiv: 1610.08452
DOI: 10.1145/3038912.3052660
URI: http://arxiv.org/abs/1610.08452
BibTeX Citekey: ZafarFATML2016
Type of degree: -

Source 1

Title: Fairness, Accountability, and Transparency in Machine Learning
Short title: FAT ML 2016
Other: FAT/ML 2016
Source genre: Conference proceedings
Creators:
Affiliations:
Place, publisher, edition: -
Pages: -
Volume / Issue: -
Article number: -
Start / End page: -
Identifier: -