



Journal Article

A reputation game simulation: Emergent social phenomena from information theory


Enßlin,  Torsten
Computational Structure Formation, MPI for Astrophysics, Max Planck Society;


Enßlin, T., Kainz, V., & Boehm, C. (2022). A reputation game simulation: Emergent social phenomena from information theory. Annalen der Physik, 534(5): 2100277. doi:10.1002/andp.202100277.

Cite as: https://hdl.handle.net/21.11116/0000-000B-51F3-7
Reputation is a central element of social communications, be it with human or artificial intelligence (AI), and as such can be the primary target of malicious communication strategies. There is already a vast amount of literature on trust networks and their dynamics using Bayesian principles and involving Theory of Mind models. An issue for these simulations can be the amount of information that can be stored and managed using discretizing variables and hard thresholds. Here a novel approach to the way information is updated that accounts for knowledge uncertainty and is closer to reality is proposed. Agents use information compression techniques to capture their complex environment and store it in their finite memories. The loss of information that results from this leads to emergent phenomena, such as echo chambers, self-deception, deception symbiosis, and freezing of group opinions. Various malicious strategies of agents are studied for their impact on group sociology, like sycophancy, egocentricity, pathological lying, and aggressiveness. Our set-up already provides insights into social interactions and can be used to investigate the effects of various communication strategies and find ways to counteract malicious ones. Eventually this work should help to safeguard the design of non-abusive AI systems.
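The idea of an uncertainty-aware reputation update stored in finite memory can be loosely illustrated with a toy sketch. This is not the paper's actual model: it tracks an agent's reputation as Beta-distribution pseudo-counts and "compresses" by capping the total evidence, so old observations are gradually forgotten. All names here (`Reputation`, the `memory` cap) are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class Reputation:
    # Beta-distribution pseudo-counts: alpha ~ honest messages observed,
    # beta ~ dishonest ones. The spread of Beta(alpha, beta) encodes
    # the agent's knowledge uncertainty about the other's honesty.
    alpha: float = 1.0
    beta: float = 1.0

    def mean(self) -> float:
        # Expected honesty under the current belief
        return self.alpha / (self.alpha + self.beta)

    def update(self, honest: bool, memory: float = 20.0) -> None:
        # Bayesian update on a single observed message
        if honest:
            self.alpha += 1.0
        else:
            self.beta += 1.0
        # Finite memory: rescale pseudo-counts so their sum never exceeds
        # `memory`. This lossy compression discards old evidence, which is
        # the kind of information loss the abstract ties to emergent effects.
        total = self.alpha + self.beta
        if total > memory:
            scale = memory / total
            self.alpha *= scale
            self.beta *= scale


r = Reputation()
for _ in range(50):
    r.update(honest=True)
print(round(r.mean(), 3))  # high, but bounded away from certainty by the cap
```

Because the evidence sum is capped, the belief never becomes arbitrarily sharp, and a burst of recent dishonest messages can overturn a long honest history — a crude analogue of opinion drift in bounded-memory agents.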