Relation learning in a neurocomputational architecture supports cross-domain transfer

Doumas, L. A. A., Martin, A. E., & Hummel, J. E. (2020). Relation learning in a neurocomputational architecture supports cross-domain transfer. In S. Denison, M. Mack, Y. Xu, & B. C. Armstrong (Eds.), Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020) (pp. 932-937). Montreal, QC: Cognitive Science Society.

Basic

Genre: Conference Paper

Files

Doumas_etal_2020.pdf (Publisher version), 408KB
Name: Doumas_etal_2020.pdf
Description: -
OA-Status:
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: 2020
Copyright Info: ©2020 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY).

Creators

Creators:
Doumas, Leonidas A. A.¹, Author
Martin, Andrea E.², Author
Hummel, John E.³, Author
Affiliations:
¹ University of Edinburgh, Edinburgh, UK
² Language and Computation in Neural Systems, MPI for Psycholinguistics, Max Planck Society
³ University of Illinois, Urbana, IL, USA

Content

Free keywords: -
Abstract: Humans readily generalize, applying prior knowledge to novel situations and stimuli. Advances in machine learning have begun to approximate and even surpass human performance, but these systems struggle to generalize what they have learned to untrained situations. We present a model based on well-established neurocomputational principles that demonstrates human-level generalisation. This model is trained to play one video game (Breakout) and performs one-shot generalisation to a new game (Pong) with different characteristics. The model generalizes because it learns structured representations that are functionally symbolic (viz., a role-filler binding calculus) from unstructured training data. It does so without feedback, and without requiring that structured representations are specified a priori. Specifically, the model uses neural co-activation to discover which characteristics of the input are invariant and to learn relational predicates, and oscillatory regularities in network firing to bind predicates to arguments. To our knowledge, this is the first demonstration of human-like generalisation in a machine system that does not assume structured representations to begin with.

Details

Language(s): eng - English
Dates: 2020-07
Publication Status: Published online
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: -
Degree: -

Event

Title: 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020)
Place of Event: Toronto, Canada
Start-/End Date: 2020-07-29 - 2020-08-01

Source 1

Title: Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society (CogSci 2020)
Source Genre: Proceedings
 Creator(s):
Denison, Stephanie, Editor
Mack, Michael, Editor
Xu, Yang, Editor
Armstrong, Blair C., Editor
Affiliations:
-
Publ. Info: Montreal, QC: Cognitive Science Society
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 932 - 937
Identifier: -