
Released

Research Paper

Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making

MPG Authors

Chugunova, Marina
MPI for Innovation and Competition, Max Planck Society;

External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (open access)
No openly accessible full texts are available in PuRe
Supplementary material (open access)
No openly accessible supplementary material is available
Citation

Sele, D., & Chugunova, M. (2022). Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making. Max Planck Institute for Innovation & Competition Research Paper, No. 22-20.


Citation link: https://hdl.handle.net/21.11116/0000-000B-9EF7-D
Abstract
Are people algorithm averse, as some previous literature indicates? If so, can the retention of human oversight increase the uptake of algorithmic recommendations, and does keeping a human in the loop improve accuracy? Answers to these questions are of utmost importance given the fast-growing availability of algorithmic recommendations and current intense discussions about the regulation of automated decision-making. In an online experiment, we find that 66% of participants prefer algorithmic to equally accurate human recommendations if the decision is delegated fully. This preference for algorithms increases by a further 7 percentage points if participants are able to monitor and adjust the recommendations before the decision is made. In line with automation bias, participants adjust recommendations that stem from an algorithm less than those from another human. Importantly, participants are less likely to intervene in the least accurate recommendations and adjust them less, raising concerns about the monitoring ability of a human in a Human-in-the-Loop system. Our results document a trade-off: while allowing people to adjust algorithmic recommendations increases their uptake, the adjustments made by the human monitors reduce the quality of final decisions.