Large Language Models can Segment Narrative Events Similarly to Humans


Toneva, Mariya (Max Planck Institute for Software Systems, Max Planck Society)


Michelmann, S., Kumar, M., Norman, K. A., & Toneva, M. (2023). Large Language Models can Segment Narrative Events Similarly to Humans. Retrieved from https://arxiv.org/abs/2301.10297.

Cite as: https://hdl.handle.net/21.11116/0000-000D-6C44-E

Humans perceive discrete events such as "restaurant visits" and "train rides"
in their continuous experience. One important prerequisite for studying human
event perception is the ability of researchers to quantify when one event ends
and another begins. Typically, this information is derived by aggregating
behavioral annotations from several observers. Here we present an alternative
computational approach where event boundaries are derived using a large
language model, GPT-3, instead of using human annotations. We demonstrate that
GPT-3 can segment continuous narrative text into events. GPT-3-annotated events
are significantly correlated with human event annotations. Furthermore, these
GPT-derived annotations achieve a good approximation of the "consensus"
solution (obtained by averaging across human annotations); the boundaries
identified by GPT-3 are closer to the consensus, on average, than boundaries
identified by individual human annotators. This finding suggests that GPT-3
provides a feasible solution for automated event annotations, and it
demonstrates a further parallel between human cognition and prediction in large
language models. In the future, GPT-3 may thereby help to elucidate the
principles underlying human event perception.
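The comparison described in the abstract — correlating model-derived boundaries with human annotations and checking how close each set of boundaries lies to the human consensus — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the toy annotation sets, the 20-sentence narrative length, and the use of Pearson correlation over binary boundary vectors are all assumptions made for the example.

```python
import numpy as np

def boundary_vector(boundaries, n_sentences):
    """Binary vector with a 1 wherever an annotator placed an event boundary."""
    v = np.zeros(n_sentences)
    v[list(boundaries)] = 1.0
    return v

# Hypothetical boundary annotations over a 20-sentence narrative.
n = 20
human_annotations = [{3, 9, 15}, {3, 10, 15}, {4, 9, 16}]
model_annotation = {3, 9, 15}  # e.g. boundaries produced by an LLM

human_vecs = np.array([boundary_vector(b, n) for b in human_annotations])
consensus = human_vecs.mean(axis=0)  # fraction of humans marking each sentence

# Correlation between the model's boundaries and the human consensus.
model_vec = boundary_vector(model_annotation, n)
r_model = np.corrcoef(model_vec, consensus)[0, 1]

# For comparison: each human against the leave-one-out consensus of the others.
r_humans = []
for i in range(len(human_annotations)):
    others = np.delete(human_vecs, i, axis=0).mean(axis=0)
    r_humans.append(np.corrcoef(human_vecs[i], others)[0, 1])
```

With real data, a model whose boundaries correlate with the consensus at least as strongly as individual annotators do (as the abstract reports for GPT-3) would support using the model in place of costly multi-observer annotation.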