  To "see" is to stereotype: Image tagging algorithms, gender recognition, and the accuracy-fairness trade-off

Barlas, P., Kyriakou, K., Guest, O., Kleanthous, S., & Otterbacher, J. (2021). To "see" is to stereotype: Image tagging algorithms, gender recognition, and the accuracy-fairness trade-off. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW3): 32. doi:10.1145/3432931.


Files

Barlas_etal_2020_To see is to stereotype.pdf (Publisher version), 2MB
Name: Barlas_etal_2020_To see is to stereotype.pdf
Description: -
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: 2020
Copyright Info: -
License: -


Creators

Barlas, P., Author
Kyriakou, K., Author
Guest, Olivia 1, Author
Kleanthous, S., Author
Otterbacher, J., Author
Affiliations:
1 Research Centre on Interactive Media, Smart Systems & Emerging Technologies, Nicosia, Cyprus

Content

Free keywords: -
Abstract: Machine-learned computer vision algorithms for tagging images are increasingly used by developers and researchers, having become popularized as easy-to-use "cognitive services." Yet these tools struggle with gender recognition, particularly when processing images of women, people of color and non-binary individuals. Socio-technical researchers have cited data bias as a key problem; training datasets often over-represent images of people and contexts that convey social stereotypes. The social psychology literature explains that people learn social stereotypes, in part, by observing others in particular roles and contexts, and can inadvertently learn to associate gender with scenes, occupations and activities. Thus, we study the extent to which image tagging algorithms mimic this phenomenon. We design a controlled experiment to examine the interdependence between algorithmic recognition of context and the depicted person's gender. In the spirit of auditing to understand machine behaviors, we create a highly controlled dataset of people images, imposed on gender-stereotyped backgrounds. Our methodology is reproducible and our code publicly available. Evaluating five proprietary algorithms, we find that in three, gender inference is hindered when a background is introduced. Of the two that "see" both backgrounds and gender, it is the one whose output is most consistent with human stereotyping processes that is superior in recognizing gender. We discuss the accuracy-fairness trade-off, as well as the importance of auditing black boxes in better understanding this double-edged sword.
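As a rough illustration of the auditing setup described in the abstract, the dataset-construction step (compositing a person cut-out onto a gender-stereotyped background) and the before/after comparison of returned tags could look like the following Python sketch. It assumes Pillow for the compositing step; tag_image is a hypothetical wrapper around whichever proprietary tagging service is being audited, not the authors' released code.

    from PIL import Image

    def composite(person_path, background_path, out_path):
        # Paste a transparent person cut-out onto a stereotyped background,
        # centred horizontally and aligned to the bottom of the scene.
        background = Image.open(background_path).convert("RGBA")
        person = Image.open(person_path).convert("RGBA")
        x = (background.width - person.width) // 2
        y = background.height - person.height
        background.paste(person, (x, y), mask=person)  # mask keeps transparency
        background.convert("RGB").save(out_path, "JPEG")
        return out_path

    def audit_pair(tag_image, person_only_path, composited_path):
        # tag_image: callable returning an iterable of lowercase tag strings
        # for a given image path (placeholder for the audited service).
        gender_terms = {"man", "woman", "male", "female", "boy", "girl"}
        tags_plain = set(tag_image(person_only_path))
        tags_scene = set(tag_image(composited_path))
        return {
            "gender_tags_without_background": gender_terms & tags_plain,
            "gender_tags_with_background": gender_terms & tags_scene,
        }

Comparing the two entries of the returned dictionary across many person/background pairs indicates whether introducing a stereotyped scene helps or hinders a service's gender inference, which is the comparison the abstract reports for the five audited algorithms.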

Details

Language(s): eng - English
Dates: 2021
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: Peer
Identifiers: DOI: 10.1145/3432931
Degree: -


Source 1

Title: Proceedings of the ACM on Human-Computer Interaction
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: -
Pages: -
Volume / Issue: 4 (CSCW3)
Sequence Number: 32
Start / End Page: -
Identifier: -