
Released

Journal Article

Deep neural networks and humans both benefit from compositional language structure

MPS-Authors

Galke, Lukas
Language and Genetics Department, MPI for Psycholinguistics, Max Planck Society;
Language Evolution and Adaptation in Diverse Situations (LEADS), MPI for Psycholinguistics, Max Planck Society;
Department of Mathematics and Computer Science, University of Southern Denmark;


Raviv, Limor
Language and Genetics Department, MPI for Psycholinguistics, Max Planck Society;
Language Evolution and Adaptation in Diverse Situations (LEADS), MPI for Psycholinguistics, Max Planck Society;
SCAN, University of Glasgow;

Fulltext (public)

Galke-Ram-Raviv-Deep-Neural-2024.pdf
(Publisher version), 4MB

Supplementary Material (public)
There is no public supplementary material available.
Citation

Galke, L., Ram, Y., & Raviv, L. (2024). Deep neural networks and humans both benefit from compositional language structure. Nature Communications, 15: 10816. doi:10.1038/s41467-024-55158-1.


Cite as: https://hdl.handle.net/21.11116/0000-0010-5FB7-6
Abstract
Deep neural networks drive the success of natural language processing. A fundamental property of language is its compositional structure, allowing humans to systematically produce forms for new meanings. For humans, languages with more compositional and transparent structures are typically easier to learn than those with opaque and irregular structures. However, this learnability advantage has not yet been shown for deep neural networks, limiting their use as models for human language learning. Here, we directly test how neural networks compare to humans in learning and generalizing different languages that vary in their degree of compositional structure. We evaluate the memorization and generalization capabilities of a large language model and recurrent neural networks, and show that both deep neural networks exhibit a learnability advantage for more structured linguistic input: neural networks exposed to more compositional languages show more systematic generalization, greater agreement between different agents, and greater similarity to human learners.
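To make "degree of compositional structure" concrete, below is a minimal Python sketch of topographic similarity, a measure commonly used in the language-evolution literature: the Spearman correlation between pairwise distances in meaning space and pairwise edit distances between the corresponding word forms. A highly compositional language maps similar meanings to similar forms, yielding a correlation near 1. This is an illustrative assumption for readers; it is not necessarily the exact metric used in the paper, and the toy language below is invented for the example.

# Minimal sketch: quantifying compositional structure via topographic
# similarity (an assumption for illustration, not the paper's stated method).
from itertools import combinations
from scipy.stats import spearmanr

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def topographic_similarity(meanings, forms):
    """Correlate pairwise meaning distances (Hamming over feature tuples)
    with pairwise form distances (edit distance over strings)."""
    pairs = list(combinations(range(len(meanings)), 2))
    meaning_d = [sum(x != y for x, y in zip(meanings[i], meanings[j]))
                 for i, j in pairs]
    form_d = [edit_distance(forms[i], forms[j]) for i, j in pairs]
    return spearmanr(meaning_d, form_d).correlation

# Toy, fully compositional language: one morpheme per shape, one per color.
meanings = [("circle", "red"), ("circle", "blue"),
            ("square", "red"), ("square", "blue")]
forms = ["kiwa", "kibo", "tuwa", "tubo"]
print(topographic_similarity(meanings, forms))  # 1.0 for this toy language

Scrambling the form-to-meaning mapping (e.g., holistic, unanalyzable labels for each meaning) drives the score toward 0, which is the sense in which languages in such experiments "vary in their degree of compositional structure."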