  A Bayesian attractor network with incremental learning

Sandberg, A., Lansner, A., Petersson, K. M., & Ekeberg, Ö. (2002). A Bayesian attractor network with incremental learning. Network: Computation in Neural Systems, 13(2), 179-194. doi:10.1088/0954-898X/13/2/302.

Creators

 Creators:
Sandberg, A. 1, Author
Lansner, A. 1, Author
Petersson, Karl Magnus 2, Author
Ekeberg, Ö. 1, Author
Affiliations:
1 Department of Numerical Analysis and Computing Science, Royal Institute of Technology, 100 44 Stockholm, Sweden.
2 Cognitive Neurophysiology Research Group, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.

Content

Free keywords: -
Abstract: A real-time, online learning system with capacity limits needs to gradually forget old information in order to avoid catastrophic forgetting. This can be achieved by allowing new information to overwrite old, as in a so-called palimpsest memory. This paper describes an incremental learning rule based on the Bayesian confidence propagation neural network that has palimpsest properties when employed in an attractor neural network. The network does not suffer from catastrophic forgetting, has a capacity that depends on the learning time constant, and exhibits faster convergence for newer patterns.

Details

Language(s): eng - English
Dates: 2002
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: Peer
Identifiers: DOI: 10.1088/0954-898X/13/2/302
Degree: -

Source 1

Title: Network: Computation in Neural Systems
Source Genre: Journal
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: 13 (2)
Sequence Number: -
Start / End Page: 179 - 194
Identifier: -