  Shared computational principles for language processing in humans and deep language models

Goldstein, A., Zada, Z., Buchnik, E., Schain, M., Price, A., Aubrey, B., et al. (2022). Shared computational principles for language processing in humans and deep language models. Nature Neuroscience, 25, 369-380. doi:10.1038/s41593-022-01026-4.

Files:
neu-22-mel-02-shared.pdf (Publisher version), 6MB
Name:
neu-22-mel-02-shared.pdf
Description:
OA
OA-Status:
Hybrid
Visibility:
Public
MIME-Type / Checksum:
application/pdf / [MD5]
Technical Metadata:
Copyright Date:
2022
Copyright Info:
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Creators:
Goldstein, Ariel1, 2, Author
Zada, Zaid2, Author
Buchnik, Eliav2, Author
Schain, Mariano2, Author
Price, Amy1, Author
Aubrey, Bobbi1, 3, Author
Nastase, Samuel A.1, Author
Feder, Amir2, Author
Emanuel, Dotan2, Author
Cohen, Alon2, Author
Jansen, Aren2, Author
Gazula, Harshvardhan1, Author
Choe, Gina1, 3, Author
Rao, Aditi1, 3, Author
Kim, Catherine1, 3, Author
Casto, Colton1, Author
Fanda, Lora3, Author
Doyle, Werner3, Author
Friedman, Daniel3, Author
Dugan, Patricia3, Author
Melloni, Lucia4, 5, Author
Reichart, Roi6, Author
Devore, Sasha3, Author
Flinker, Adeen3, Author
Hasenfratz, Liat1, Author
Levy, Omer7, Author
Hassidim, Avinatan2, Author
Brenner, Michael2, 8, Author
Matias, Yossi2, Author
Norman, Kenneth A.1, Author
Devinsky, Orrin3, Author
Hasson, Uri1, 3, Author
Affiliations:
1 Department of Psychology and the Neuroscience Institute, Princeton University, Princeton, NJ, USA
2 Google Research, Mountain View, CA, USA
3 New York University Grossman School of Medicine, New York, NY, USA
4 Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Max Planck Society
5 Research Group Neural Circuits, Consciousness, and Cognition, Max Planck Institute for Empirical Aesthetics, Max Planck Society, Grüneburgweg 14, 60322 Frankfurt am Main, DE
6 Faculty of Industrial Engineering and Management, Technion, Israel Institute of Technology, Haifa, Israel
7 Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel
8 School of Engineering and Applied Science, Harvard University, Cambridge, MA, USA

Content

Free keywords: Electrophysiology, Language, Neural decoding, Neural encoding
 Abstract: Departing from traditional linguistic models, advances in deep learning have resulted in a new type of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language.
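
To make the abstract's computational principles concrete, the following is a minimal sketch (not the authors' pipeline) of how pre-onset next-word predictions and post-onset surprise can be read out of an autoregressive DLM. The choice of GPT-2 and the Hugging Face transformers API is an illustrative assumption; the abstract does not prescribe a specific model or toolkit.

# Minimal sketch: per-word prediction and surprise from an autoregressive DLM.
# GPT-2 / Hugging Face transformers are assumptions for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "after the storm we walked back to the old house"   # any narrative snippet
ids = tokenizer(text, return_tensors="pt").input_ids        # shape: (1, n_tokens)

with torch.no_grad():
    logits = model(ids).logits                               # next-token logits at every position

# Principle 1: at position t the model predicts token t+1 before it arrives.
# Principle 2: surprise is -log p(actual token t+1) under that prediction.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
actual_next = ids[0, 1:]
surprise = -log_probs[torch.arange(actual_next.size(0)), actual_next]

for tok, s in zip(tokenizer.convert_ids_to_tokens(actual_next.tolist()), surprise.tolist()):
    print(f"{tok:>12s}  surprise = {s:.2f} nats")

The same forward pass can also expose the model's contextual embeddings (the abstract's third principle), e.g. by calling model(ids, output_hidden_states=True) and reading the returned hidden_states, one tensor per layer.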

Details

Language(s): eng - English
 Dates: Received: 2021-01-31; Accepted: 2022-01-27; Published online: 2022-03-07; Issued: 2022-03
 Publication Status: Issued
 Rev. Type: Peer
 Identifiers: DOI: 10.1038/s41593-022-01026-4

Source 1

Title: Nature Neuroscience
Other: Nat. Neurosci.
Source Genre: Journal
Publ. Info: New York, NY : Nature America Inc.
Volume / Issue: 25
Start / End Page: 369 - 380
Identifier: ISSN: 1097-6256
CoNE: https://pure.mpg.de/cone/journals/resource/954925610931