
Released

Poster

Learning hierarchical centre-embedding structures: Influence of distributional properties of the Input

MPS-Authors

Chen, Yao
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;


Ferrari, Ambra
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;


Hagoort, Peter
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Donders Institute for Brain, Cognition and Behaviour, External Organizations;


Poletiek, Fenna H.
Neurobiology of Language Department, MPI for Psycholinguistics, Max Planck Society;
Leiden University;

Citation

Chen, Y., Ferrari, A., Hagoort, P., Bocanegra, B., & Poletiek, F. H. (2023). Learning hierarchical centre-embedding structures: Influence of distributional properties of the Input. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.


Cite as: https://hdl.handle.net/21.11116/0000-000F-1674-6
Abstract
Nearly all human languages have grammars with complex recursive structures. These structures pose notable learning challenges. Two distributional properties of the input may facilitate learning: the presence of semantic biases (e.g., p(barks|dog) > p(talks|dog)) and a Zipf distribution, in which short sentences are far more frequent than longer ones. This project tested the effect of these sources of information on statistical learning of a hierarchical center-embedding grammar, using an artificial grammar learning paradigm. Semantic biases were represented by variations in transitional probabilities between words, comparing a biased input (p(barks|dog) > p(talks|dog)) with a non-biased input (p(barks|dog) = p(talks|dog)). The Zipf distribution was compared to a flat distribution, in which sentences of different lengths occur equally often. In a 2×2 factorial design, we tested for effects of transitional probabilities (biased/non-biased) and of the distribution of sequence lengths (Zipf distribution/flat distribution) on implicit learning and explicit ratings of grammaticality. Preliminary results show that a Zipf-shaped and semantically biased input facilitates grammar learnability. This project thus contributes to understanding how we learn complex structures with long-distance dependencies: learning may be sensitive to specific distributional properties of the linguistic input that mirror meaningful aspects of the world and favor short utterances.
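
To make the 2×2 manipulation concrete, the sketch below simulates how such input could be generated. It is a minimal illustration, not the authors' stimulus-generation code: the vocabulary, the length range, the probability values, and all function names are assumptions chosen only to show the contrast between a Zipf-shaped versus flat length distribution and biased versus non-biased transitional probabilities in centre-embedded (A1 A2 ... B2 B1) sequences.

# Minimal sketch (Python); all specifics are illustrative assumptions.
import random

def sample_length(lengths=(1, 2, 3), zipf=True):
    # Zipf-shaped input: frequency proportional to 1/rank, so sentences with
    # fewer embeddings dominate; flat input: all lengths equally frequent.
    weights = [1.0 / (i + 1) for i in range(len(lengths))] if zipf else [1.0] * len(lengths)
    return random.choices(lengths, weights=weights, k=1)[0]

def sample_verb(noun, biased=True):
    # Biased input: p(barks|dog) > p(talks|dog); non-biased: both equal.
    verbs = {"dog": ["barks", "talks"], "human": ["talks", "barks"]}
    weights = [0.8, 0.2] if biased else [0.5, 0.5]
    return random.choices(verbs[noun], weights=weights, k=1)[0]

def generate_sentence(zipf=True, biased=True):
    # Centre-embedded structure: nouns first, then the matching verbs in
    # reverse order, creating long-distance dependencies.
    n = sample_length(zipf=zipf)
    nouns = random.choices(["dog", "human"], k=n)
    verbs = [sample_verb(noun, biased=biased) for noun in nouns]
    return " ".join(nouns + verbs[::-1])

# One small stimulus sample per cell of the 2x2 design (distribution x bias).
for zipf in (True, False):
    for biased in (True, False):
        print(zipf, biased, [generate_sentence(zipf, biased) for _ in range(3)])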