  The relational processing limits of classic and contemporary neural network models of language processing

Puebla, G., Martin, A. E., & Doumas, L. A. A. (2021). The relational processing limits of classic and contemporary neural network models of language processing. Language, Cognition and Neuroscience, 36(2), 240-254. doi:10.1080/23273798.2020.1821906.

Files

Puebla_Martin_Doumas_2020suppl_Relational processing limits of classic and ...pdf (Supplementary material), 121KB
Name: supplementary material
Description: -
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -
License: -

Puebla_Martin_Doumas_2021_Relational processing limits of....pdf (Publisher version), 3MB
Name: Puebla_Martin_Doumas_2021_Relational processing limits of....pdf
Description: -
OA-Status: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -
License: -


Creators

Creators:
Puebla, Guillermo (1, 2), Author
Martin, Andrea E. (3, 4), Author
Doumas, Leonidas A. A. (1), Author
Affiliations:
1. University of Edinburgh, Edinburgh, UK
2. Universidad de Tarapacá, Arica, Chile
3. Language and Computation in Neural Systems, MPI for Psycholinguistics, Max Planck Society
4. Donders Institute for Brain, Cognition and Behaviour, External Organizations

Content

Free keywords: -
Abstract: Whether neural networks can capture relational knowledge is a matter of long-standing controversy. Recently, some researchers have argued that (1) classic connectionist models can handle relational structure and (2) the success of deep learning approaches to natural language processing suggests that structured representations are unnecessary to model human language. We tested the Story Gestalt model, a classic connectionist model of text comprehension, and a Sequence-to-Sequence with Attention model, a modern deep learning architecture for natural language processing. Both models were trained to answer questions about stories based on abstract thematic roles. Two simulations varied the statistical structure of new stories while keeping their relational structure intact. The performance of each model fell below chance under at least one manipulation. We argue that both models fail our tests because they cannot perform dynamic binding. These results cast doubt on the suitability of traditional neural networks for explaining relational reasoning and language processing phenomena.
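The Sequence-to-Sequence with Attention architecture named in the abstract centres on an attention step that blends encoder states according to their relevance to the current decoding step. As a rough illustration only (not the paper's implementation), here is a minimal NumPy sketch of one common formulation, scaled dot-product attention; the function name, dimensions, and random inputs are all hypothetical:

import numpy as np

def attend(query, keys, values):
    # Similarity between the decoder query and each encoder key.
    scores = keys @ query / np.sqrt(keys.shape[-1])
    # Numerically stabilised softmax turns scores into weights.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: encoder values blended by their attention weights.
    return weights @ values

rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 16))    # 5 encoder states, dimension 16 (illustrative)
values = rng.normal(size=(5, 16))
query = rng.normal(size=16)        # current decoder state (illustrative)
context = attend(query, keys, values)
print(context.shape)               # (16,)

On the abstract's argument, a weighted blend of this kind does not by itself bind a filler to a thematic role dynamically: the context vector mixes content by similarity rather than maintaining an explicit, reusable role-filler binding.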

Details

Language(s): eng - English
Dates: 2020-09-21 (published online), 2021 (issued)
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: Peer
 Identifiers: DOI: 10.1080/23273798.2020.1821906
 Degree: -


Source 1

Title: Language, Cognition and Neuroscience
Source Genre: Journal
Creator(s): -
Affiliations: -
Publ. Info: London : Routledge
Pages: -
Volume / Issue: 36 (2)
Sequence Number: -
Start / End Page: 240 - 254
Identifier: ISSN: 2327-3798
CoNE: https://pure.mpg.de/cone/journals/resource/2327-3798