  From Parity to Preference-based Notions of Fairness in Classification

Zafar, M. B., Valera, I., Gomez Rodriguez, M., Gummadi, K., & Weller, A. (2017). From Parity to Preference-based Notions of Fairness in Classification. Retrieved from http://arxiv.org/abs/1707.00010.

Files

arXiv:1707.00010.pdf (Preprint), 2MB
Name:
arXiv:1707.00010.pdf
Description:
File downloaded from arXiv at 2018-03-16 12:26. To appear in Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017). Code available at: https://github.com/mbilalzafar/fair-classification
OA-Status:
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

 Creators:
Zafar, Muhammad Bilal1, Author
Valera, Isabel2, Author
Gomez Rodriguez, Manuel2, Author
Gummadi, Krishna1, Author
Weller, Adrian2, Author
Affiliations:
1Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society, ou_2105291
2Group M. Gomez Rodriguez, Max Planck Institute for Software Systems, Max Planck Society, ou_2105290

Content

Free keywords: Statistics, Machine Learning, stat.ML, Computer Science, Learning, cs.LG
 Abstract: The adoption of automated, data-driven decision making in an ever expanding range of applications has raised concerns about its potential unfairness towards certain social groups. In this context, a number of recent studies have focused on defining, detecting, and removing unfairness from data-driven decision systems. However, the existing notions of fairness, based on parity (equality) in treatment or outcomes for different social groups, tend to be quite stringent, limiting the overall decision making accuracy. In this paper, we draw inspiration from the fair-division and envy-freeness literature in economics and game theory and propose preference-based notions of fairness -- given the choice between various sets of decision treatments or outcomes, any group of users would collectively prefer its treatment or outcomes, regardless of the (dis)parity as compared to the other groups. Then, we introduce tractable proxies to design margin-based classifiers that satisfy these preference-based notions of fairness. Finally, we experiment with a variety of synthetic and real-world datasets and show that preference-based fairness allows for greater decision accuracy than parity-based fairness.
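Note: the authors' implementation is in the GitHub repository linked above. Purely as an illustration of the "preferred treatment" idea described in the abstract (not the authors' code), the sketch below trains a separate logistic-regression boundary for each of two groups on synthetic data and checks whether each group obtains at least as high a rate of positive decisions from its own boundary as it would from the other group's boundary; the data-generation scheme and all variable names are invented for the example.

# Hypothetical sketch of checking the preferred-treatment criterion;
# see https://github.com/mbilalzafar/fair-classification for the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: two groups (z = 0, z = 1) with shifted feature distributions.
n = 1000
z = rng.integers(0, 2, size=n)
X = rng.normal(loc=z[:, None] * 1.0, scale=1.0, size=(n, 2))
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

# Train a separate (group-conditional) decision boundary for each group.
clfs = {g: LogisticRegression().fit(X[z == g], y[z == g]) for g in (0, 1)}

def benefit(group, boundary):
    """Group benefit: fraction of the group's members receiving the positive decision."""
    return clfs[boundary].predict(X[z == group]).mean()

for g in (0, 1):
    own, other = benefit(g, g), benefit(g, 1 - g)
    # Preferred treatment holds for group g if own >= other.
    print(f"group {g}: benefit under own boundary {own:.2f}, "
          f"under the other group's boundary {other:.2f}")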

Details

Language(s): eng - English
Dates: 2017-06-30, 2017-11-28, 2017
 Publication Status: Published online
 Pages: 14 p.
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: arXiv: 1707.00010
URI: http://arxiv.org/abs/1707.00010
BibTeX Citekey: Zafar2017
 Degree: -
