
Released

Journal Article

Asking GPT for the ordinary meaning of statutory terms

MPS-Authors
/persons/resource/persons183106

Engel, Christoph
Max Planck Institute for Research on Collective Goods, Max Planck Society;

Fulltext (public)

McAdams Engel_Final.pdf
(Publisher version), 742KB

Citation

Engel, C., & McAdams, R. H. (2024). Asking GPT for the ordinary meaning of statutory terms. Journal of Law, Technology & Policy, (2), 235-296.


Cite as: https://hdl.handle.net/21.11116/0000-000F-F8BD-5
Abstract
We report on our test of the Large Language Model (LLM) ChatGPT (GPT) as a tool for generating evidence of the ordinary meaning of statutory terms. We explain why the most useful evidence for interpretation involves a distribution of replies rather than only what GPT regards as the single “best” reply. That motivates our decision to use GPT-3.5 Turbo instead of GPT-4 and to run each prompt we use 100 times. Asking GPT whether the statutory term “vehicle” includes a list of candidate objects (e.g., bus, bicycle, skateboard) allows us to test it against a benchmark, the results of a high-quality experimental survey (Tobia 2020) that asked over 2,800 English speakers the same questions. After learning which prompts fail and which one works best (a belief prompt combined with a Likert-scale reply), we use the successful prompt to test the effects of “informing” GPT that the term appears in a particular rule (one of five possible) or that the legal rule using the term has a particular purpose (one of six possible). Finally, we explore GPT’s sensitivity to meaning at a particular moment in the past (the 1950s) and its ability to distinguish extensional from intensional meaning. To our knowledge, these are the first tests of GPT as a tool for generating empirical data on the ordinary meaning of statutory terms. Legal actors have good reason to be cautious, but LLMs have the potential to radically facilitate and improve legal tasks, including the interpretation of statutes.
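
To make the sampling procedure described in the abstract concrete, the following is a minimal Python sketch of how one might elicit a distribution of replies from GPT-3.5 Turbo through the OpenAI chat API. The prompt wording, the 1–7 Likert-scale phrasing, and the sampling temperature are illustrative assumptions, not the authors' exact protocol.

```python
from collections import Counter

from openai import OpenAI  # assumes the openai Python package (v1.x) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative belief-style prompt with a Likert-scale answer format;
# the authors' exact wording is not reproduced here.
PROMPT = (
    "Do you believe ordinary English speakers would say that a bicycle "
    "is a 'vehicle'? Reply with a single number from 1 (definitely not) "
    "to 7 (definitely yes)."
)

N_RUNS = 100  # the paper samples each prompt 100 times to obtain a distribution

replies = []
for _ in range(N_RUNS):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # chosen over GPT-4 to preserve variance in replies
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,         # assumed default sampling; not specified here
        max_tokens=5,
    )
    replies.append(response.choices[0].message.content.strip())

# The evidence of ordinary meaning is the distribution of ratings,
# not a single "best" answer.
print(Counter(replies))
```

Looping the same request over a list of candidate objects (bus, bicycle, skateboard, and so on) would mirror the structure of the comparison against the survey benchmark.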