
  Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

Bitzer, S., & Kiebel, S. (2012). Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks. Biological Cybernetics, 106(4-5), 201-217. doi:10.1007/s00422-012-0490-x.

Basic
Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0012-10C4-0
Version Permalink: http://hdl.handle.net/21.11116/0000-0003-CF75-1
Genre: Journal Article

Files

Bitzer_2012_Recognizing.pdf (Publisher version), 713KB
Name: Bitzer_2012_Recognizing.pdf
Description: -
Visibility: Public
MIME-Type / Checksum: application/pdf / [MD5]
Technical Metadata:
Copyright Date: -
Copyright Info: -
License: -

Creators

Creators:
Bitzer, Sebastian (1), Author
Kiebel, Stefan (1), Author
Affiliations:
(1) Department Neurology, MPI for Human Cognitive and Brain Sciences, Max Planck Society, ou_634549

Content

Free keywords: Recurrent neural networks; Bayesian inference; Nonlinear dynamics; Human motion
 Abstract: Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. of speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN by an application to the online decoding (i.e. recognition) of human kinematics.
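The abstract describes the rRNN scheme: a generative RNN predicts the dynamic input, and recognition proceeds by neurons computing and exchanging prediction and prediction-error messages. A minimal Python sketch of such a predictive-coding recognition loop is given below. Everything here is an illustrative assumption — a toy 4-unit tanh network with a contractive weight matrix and a fixed scalar update gain — not the paper's actual Bayesian update equations, which are derived from the generative model in the article itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-unit generative RNN: a 0.9-scaled block rotation keeps the map
# contractive, so this illustration is well behaved.
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
W = 0.9 * np.kron(np.eye(2), R)               # 4x4 recurrent weight matrix

def f(x):
    """One generative step: each neuron applies tanh to its integrated input."""
    return np.tanh(W @ x)

# Simulate the generative model and add observation noise.
T = 50
x_true = np.zeros((T, 4))
x_true[0] = rng.normal(size=4)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1])
y = x_true + 0.1 * rng.normal(size=(T, 4))    # noisy dynamic stimulus

# Recognition loop: predict with the generative RNN, then correct the state
# estimate with a gain-weighted prediction error (a simplified stand-in for
# the Bayesian update equations of the paper).
k = 0.5                                        # assumed fixed update gain
x_hat = np.zeros((T, 4))
x_hat[0] = rng.normal(size=4)                  # deliberately wrong initial state
for t in range(1, T):
    pred = f(x_hat[t - 1])                     # prediction message
    err = y[t] - pred                          # prediction-error message
    x_hat[t] = pred + k * err                  # corrected estimate

final_err = np.linalg.norm(x_hat[-1] - x_true[-1])
print(f"tracking error after {T} steps: {final_err:.3f}")
```

Because each step folds the prediction error back into the state estimate, the estimate recovers from the deliberately wrong initial condition and stays near the true trajectory despite observation noise — a toy version of the robustness to initial conditions and noise that the abstract highlights.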

Details

Language(s): eng - English
 Dates: 2011-06-22, 2012-04-19, 2012-05-12, 2012-07-01
 Publication Status: Published in print
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: Peer
 Identifiers: DOI: 10.1007/s00422-012-0490-x
PMID: 22581026
Other: Epub 2012
 Degree: -

Source 1

Title: Biological Cybernetics
Other: Biol. Cybern.
Source Genre: Journal
Publ. Info: Berlin : Springer
Pages: -
Volume / Issue: 106 (4-5)
Sequence Number: -
Start / End Page: 201 - 217
Identifier: ISSN: 0340-1200
CoNE: https://pure.mpg.de/cone/journals/resource/954927549307