
Released

Journal Article

Artificial Intelligence Should Not Become a “Black Hole” for Human Agency in Tort Law

MPS-Authors
Kim, Daria
MPI for Innovation and Competition, Max Planck Society

Fulltext (public)

TLR_DK_AI_v2.pdf
(Publisher version), 194KB

Citation

Kim, D. (2023). Artificial Intelligence Should Not Become a “Black Hole” for Human Agency in Tort Law. The Tort Law Review, 29(2), 52-168.


Cite as: https://hdl.handle.net/21.11116/0000-000D-C9EC-7
Abstract
This article analyses the implications for tort law of the tendency to anthropomorphise artificial intelligence (AI) systems. It shows that viewing AI technology as “autonomous”, “unexplainable” and “unpredictable” can mislead the “fit-for-purpose” assessment of existing liability regimes. The analysis points out that the risks and harm associated with AI technology are not inflicted by AI systems as such but are mediated through AI applications, and that the main challenge for allocating tortious liability lies in the highly distributed causation between human conduct and harm. Overall, it is argued that humans can and should retain agency over mitigating technological risks and internalising harmful effects, even when the sources of those risks and harms are highly distributed.