  iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making

Lahoti, P., Weikum, G., & Gummadi, K. P. (2018). iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making. Retrieved from http://arxiv.org/abs/1806.01059.

Basic

Genre: Paper
Latex: {iFair}: {L}earning Individually Fair Data Representations for Algorithmic Decision Making

Files

arXiv:1806.01059.pdf (Preprint), 653KB
Name:
arXiv:1806.01059.pdf
Description:
File downloaded from arXiv at 2018-09-13 12:29
OA-Status:
-
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
-
Copyright Info:
-

Creators

Creators:
Lahoti, Preethi (1), Author
Weikum, Gerhard (1), Author
Gummadi, Krishna P. (2), Author
Affiliations:
(1) Databases and Information Systems, MPI for Informatics, Max Planck Society, ou_24018
(2) External Organizations, ou_persistent22

Content

Free keywords: Computer Science, Learning (cs.LG); Computer Science, Information Retrieval (cs.IR); Statistics, Machine Learning (stat.ML)
Abstract: In a growing number of applications, people are rated and ranked for algorithmic decision making, typically based on machine learning. Research on how to incorporate fairness into such tasks has predominantly pursued the paradigm of group fairness: ensuring that each ethnic or social group receives its fair share in the outcomes of classifiers and rankings. In contrast, the alternative paradigm of individual fairness has received relatively little attention. This paper introduces a method for probabilistically clustering user records into a low-rank representation that captures individual fairness yet also achieves high accuracy in classification and regression models. Our notion of individual fairness requires that users who are similar in all task-relevant attributes, such as job qualification, while disregarding all potentially discriminating attributes, such as gender, should have similar outcomes. Since the case for fairness is ubiquitous across many tasks, we aim to learn general representations that can be applied to arbitrary downstream use cases. We demonstrate the versatility of our method by applying it to classification and learning-to-rank tasks on two real-world datasets. Our experiments show substantial improvements over the best prior work for this setting.
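The abstract describes learning a low-rank, prototype-based representation that trades off reconstruction accuracy against individual fairness. The following is a minimal NumPy sketch of that kind of objective, assuming a prototype formulation in which soft cluster memberships are computed from non-protected attributes only; all names (ifair_loss, n_protos, lam) and the choice of optimizer are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: a prototype-based objective in the spirit of
# the individually fair representation described in the abstract.
# Names (ifair_loss, n_protos, lam) and the optimizer are assumptions.
import numpy as np
from scipy.optimize import minimize

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ifair_loss(flat_protos, X, nonprot_idx, n_protos, lam=1.0):
    """Reconstruction (utility) loss + lam * individual-fairness loss."""
    n, d = X.shape
    V = flat_protos.reshape(n_protos, d)     # prototype vectors
    Xn = X[:, nonprot_idx]                   # task-relevant attributes only
    Vn = V[:, nonprot_idx]
    # Soft assignment of each record to prototypes, measured on
    # non-protected attributes so protected information cannot leak in.
    D = ((Xn[:, None, :] - Vn[None, :, :]) ** 2).sum(-1)  # (n, n_protos)
    U = softmax(-D)                          # membership probabilities
    X_hat = U @ V                            # low-rank reconstruction
    util = ((X - X_hat) ** 2).sum()          # utility (accuracy) term
    # Fairness term: users close in task-relevant attributes should
    # receive close representations (memberships).
    pd_repr = ((U[:, None, :] - U[None, :, :]) ** 2).sum(-1)
    pd_input = ((Xn[:, None, :] - Xn[None, :, :]) ** 2).sum(-1)
    fair = ((pd_repr - pd_input) ** 2).sum()
    return util + lam * fair

# Toy usage: 3 features, column 2 treated as the protected attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
nonprot_idx = [0, 1]
n_protos = 4
x0 = rng.normal(size=n_protos * X.shape[1])
res = minimize(ifair_loss, x0, args=(X, nonprot_idx, n_protos, 1.0),
               method="L-BFGS-B")
V_learned = res.x.reshape(n_protos, X.shape[1])
```

In this sketch the fairness term penalizes representations whose pairwise distances diverge from pairwise distances measured on the task-relevant (non-protected) attributes, which is one way to operationalize "similar users should have similar outcomes."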

Details

Language(s): eng - English
Dates: 2018-06-04, 2018
Publication Status: Published online
Pages: 12 p.
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: arXiv: 1806.01059
URI: http://arxiv.org/abs/1806.01059
BibTex Citekey: Lahoti_arXiv1806.01059
Degree: -
