
Released

Journal Article

Playing repeated games with Large Language Models

MPS-Authors
/persons/resource/persons294073

Akata, E
Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons241804

Schulz, L
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons276874

Coda-Forno, J
Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons139782

Schulz, E
Research Group Computational Principles of Intelligence, Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Akata, E., Schulz, L., Coda-Forno, J., Oh, S., Bethge, M., & Schulz, E. (2025). Playing repeated games with Large Language Models. Nature Human Behaviour, Epub ahead of print. doi:10.1038/s41562-025-02172-y.


Cite as: https://hdl.handle.net/21.11116/0000-000D-3A43-7
Abstract
Large language models (LLMs) are increasingly used in applications where they interact with humans and other agents. We propose to use behavioural game theory to study LLMs' cooperation and coordination behaviour. Here we let different LLMs play finitely repeated 2 × 2 games with each other, with human-like strategies, and with actual human players. Our results show that LLMs perform particularly well at self-interested games such as the iterated Prisoner's Dilemma family. However, they behave suboptimally in games that require coordination, such as the Battle of the Sexes. We verify that these behavioural signatures are stable across robustness checks. We also show how GPT-4's behaviour can be modulated by providing additional information about its opponent and by using a 'social chain-of-thought' strategy. This also leads to better scores and more successful coordination when interacting with human players. These results enrich our understanding of LLMs' social behaviour and pave the way for a behavioural game theory for machines.
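
For readers unfamiliar with the setup, the sketch below illustrates what a finitely repeated 2 × 2 game looks like as code. It is a minimal illustration, assuming a generic agent interface and the textbook Prisoner's Dilemma payoff matrix; the strategy names, payoff values, and function signatures here are illustrative assumptions, not the paper's actual prompts or protocol. In the study itself, an LLM's move would be obtained by prompting the model with the game rules and history rather than by a hand-coded rule.

```python
# Minimal sketch of a finitely repeated 2x2 game. Payoffs and strategies
# are illustrative (standard Prisoner's Dilemma values), not the paper's
# exact experimental setup.

from typing import Callable, List, Tuple

Action = str  # "C" (cooperate) or "D" (defect)
History = List[Tuple[Action, Action]]  # (own move, opponent move) per round

# Textbook Prisoner's Dilemma payoffs: (row player, column player).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(history: History) -> Action:
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history: History) -> Action:
    """Defect in every round, regardless of history."""
    return "D"

def play_repeated_game(
    agent_a: Callable[[History], Action],
    agent_b: Callable[[History], Action],
    rounds: int = 10,
) -> Tuple[int, int]:
    """Play a finitely repeated 2x2 game; return cumulative scores."""
    hist_a: History = []
    hist_b: History = []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = agent_a(hist_a), agent_b(hist_b)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append((a, b))  # each agent sees the game from its own side
        hist_b.append((b, a))
    return score_a, score_b

if __name__ == "__main__":
    print(play_repeated_game(tit_for_tat, always_defect, rounds=10))
```

Under these payoffs, tit_for_tat against always_defect over ten rounds scores (9, 14): a single exploited opening round, then mutual defection. An LLM agent would slot into the same Callable[[History], Action] interface, which is what makes behavioural comparisons against classic strategies straightforward.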