  Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment

Zafar, M. B., Valera, I., Gomez Rodriguez, M., & Gummadi, K. P. (2016). Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment. Fairness, Accountability, and Transparency in Machine Learning. doi:10.1145/3038912.3052660.


Files

arXiv:1610.08452.pdf (Preprint), 644KB
Name:
arXiv:1610.08452.pdf
Description:
File downloaded from arXiv at 2017-04-12 12:25. To appear in Proceedings of the 26th International World Wide Web Conference (WWW), 2017. Code available at: https://github.com/mbilalzafar/fair-classification
OA-Status:
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Zafar, Muhammad Bilal (1), Author
Valera, Isabel (2), Author
Gomez Rodriguez, Manuel (2), Author
Gummadi, Krishna P. (1), Author
Affiliations:
(1) Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society, ou_2105291
(2) Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society, ou_2105290

Content

Free keywords: Statistics, Machine Learning, stat.ML, Computer Science, Learning, cs.LG
Abstract: Automated data-driven decision making systems are increasingly being used to assist, or even replace, humans in many settings. These systems function by learning from historical decisions, often taken by humans. In order to maximize the utility of these systems (or classifiers), their training involves minimizing the errors (or misclassifications) over the given historical data. However, it is quite possible that the optimally trained classifier makes decisions for people belonging to different social groups with different misclassification rates (e.g., misclassification rates for females are higher than for males), thereby placing these groups at an unfair disadvantage. To account for and avoid such unfairness, in this paper we introduce a new notion of unfairness, disparate mistreatment, which is defined in terms of misclassification rates. We then propose intuitive measures of disparate mistreatment for decision boundary-based classifiers, which can be easily incorporated into their formulation as convex-concave constraints. Experiments on synthetic as well as real-world datasets show that our methodology is effective at avoiding disparate mistreatment, often at a small cost in terms of accuracy.
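
As a rough illustration of the notion described in the abstract (not the authors' implementation; their constrained training code is in the linked fair-classification repository), the following Python sketch measures disparate mistreatment post hoc for an already-trained classifier as the gaps in false positive and false negative rates between two groups. The function name, array layout, and toy data are assumptions made for this example.

import numpy as np

def disparate_mistreatment_gaps(y_true, y_pred, group):
    # Disparate mistreatment concerns group differences in misclassification
    # rates. Report the absolute gaps in false positive rate (FPR) and false
    # negative rate (FNR) between the two groups encoded in `group` (0/1,
    # e.g. a protected attribute). Values near zero indicate little or no
    # disparate mistreatment.
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    fpr, fnr = {}, {}
    for g in (0, 1):
        in_g = group == g
        negatives = in_g & (y_true == 0)
        positives = in_g & (y_true == 1)
        fpr[g] = np.mean(y_pred[negatives] == 1)  # FPR within group g
        fnr[g] = np.mean(y_pred[positives] == 0)  # FNR within group g
    return abs(fpr[0] - fpr[1]), abs(fnr[0] - fnr[1])

# Toy usage: group 0 has a higher FPR, group 1 a higher FNR.
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 0, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
fpr_gap, fnr_gap = disparate_mistreatment_gaps(y_true, y_pred, group)
print(f"FPR gap: {fpr_gap:.2f}, FNR gap: {fnr_gap:.2f}")

Note that the paper incorporates such measures at training time, as convex-concave constraints on decision boundary-based classifiers; this sketch only evaluates a classifier after training.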

Details

Language(s): eng - English
Dates: 2016-10-26, 2017-03-08, 2016
 Publication Status: Published online
 Pages: 10 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1610.08452
DOI: 10.1145/3038912.3052660
URI: http://arxiv.org/abs/1610.08452
BibTeX Citekey: ZafarFATML2016
 Degree: -

Source 1

Title: Fairness, Accountability, and Transparency in Machine Learning
  Abbreviation: FAT ML 2016
  Other: FAT/ML 2016
Source Genre: Proceedings
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: -
Identifier: -