Item Details


Released

Preprint

Emergent Dominance Hierarchies in Reinforcement Learning Agents

MPS-Authors

Alon, N.
Department of Computational Neuroscience, Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resource

https://arxiv.org/pdf/2401.12258.pdf
(Full text (general))

Fulltext (restricted access)
There are currently no full texts shared for your IP range.
Fulltext (public)
There are no public fulltexts available
Supplementary Material (public)
There is no public supplementary material available
Citation

Rachum, R., Nakar, Y., Tomlinson, B., Alon, N., & Mirsky, R. (submitted). Emergent Dominance Hierarchies in Reinforcement Learning Agents.


Cite as: https://hdl.handle.net/21.11116/0000-000F-0415-5
Abstract
Modern Reinforcement Learning (RL) algorithms are able to outperform humans in a wide variety of tasks. Multi-agent reinforcement learning (MARL) settings present additional challenges, and successful cooperation in mixed-motive groups of agents depends on a delicate balancing act between individual and group objectives. Social conventions and norms, often inspired by human institutions, are used as tools for striking this balance.
In this paper, we examine a fundamental, well-studied social convention that underlies cooperation in both animal and human societies: dominance hierarchies.
We adapt the ethological theory of dominance hierarchies to artificial agents, borrowing the established terminology and definitions with as few amendments as possible. We demonstrate that populations of RL agents, operating without explicit programming or intrinsic rewards, can invent, learn, enforce, and transmit a dominance hierarchy to new populations. The dominance hierarchies that emerge have a similar structure to those studied in chickens, mice, fish, and other species.
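As an illustrative aside, not drawn from the paper: ethologists commonly quantify how linear a dominance hierarchy is with Landau's linearity index h, computed from a matrix of pairwise dominance outcomes (h = 0 means no linear order; h = 1 means a perfectly linear "pecking order"). The short Python/NumPy sketch below shows that standard measure; the function name and example matrix are our own hypothetical illustration, not the authors' code.

import numpy as np

def landau_h(wins: np.ndarray) -> float:
    """Landau's linearity index h for an n-by-n dominance matrix.

    wins[i, j] = 1 if agent i dominates agent j (e.g., wins the
    majority of their pairwise contests), else 0.
    """
    n = wins.shape[0]
    v = wins.sum(axis=1)  # number of agents each agent dominates
    return (12.0 / (n ** 3 - n)) * float(np.sum((v - (n - 1) / 2.0) ** 2))

# Example: a perfectly linear hierarchy among 4 agents (0 > 1 > 2 > 3).
w = np.triu(np.ones((4, 4)), k=1)
print(landau_h(w))  # 1.0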