Journal Article

Deep learning models to study sentence comprehension in the human brain

MPS-Authors

Arana, Sophie
Donders Institute for Brain, Cognition and Behaviour, External Organizations;
University of Oxford;
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;


Hagoort, Peter
Donders Institute for Brain, Cognition and Behaviour, External Organizations;
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;

External Resource

link to preprint
(Supplementary material)

Citation

Arana, S., Pesnot Lerousseau, J., & Hagoort, P. (2024). Deep learning models to study sentence comprehension in the human brain. Language, Cognition and Neuroscience, 39(8), 972-990. doi:10.1080/23273798.2023.2198245.


Cite as: https://hdl.handle.net/21.11116/0000-000C-A0EE-3
Abstract
Recent artificial neural networks that process natural language achieve unprecedented performance on tasks requiring sentence-level understanding. As such, they could be interesting models of the integration of linguistic information in the human brain. We review studies that compare these artificial language models with human brain activity and assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension. Two main results emerge. First, the neural representation of word meaning aligns with the context-dependent, dense word vectors used by artificial neural networks. Second, the processing hierarchy that emerges within artificial neural networks broadly matches that of the brain, but is surprisingly inconsistent across studies. We discuss current challenges in establishing artificial neural networks as process models of natural language comprehension, and we suggest exploiting their highly structured representational geometry when mapping representations to brain data.