  Gradient based hyperparameter optimization in Echo State Networks

Thiede, L. A., & Parlitz, U. (2019). Gradient based hyperparameter optimization in Echo State Networks. Neural Networks, 115, 23-29. doi:10.1016/j.neunet.2019.02.001.

Creators:
Thiede, L. A., Author
Parlitz, Ulrich¹, Author
Affiliations:
¹ Research Group Biomedical Physics, Max Planck Institute for Dynamics and Self-Organization, Max Planck Society

Content

Free keywords: Echo State Network; Hyperparameters; Reservoir computing
Abstract: Like most machine learning algorithms, Echo State Networks possess several hyperparameters that have to be carefully tuned to achieve the best performance. To minimize the error on a specific task, we present a gradient-based optimization algorithm for the input scaling, the spectral radius, the leaking rate, and the regularization parameter.
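To make the four hyperparameters named in the abstract concrete, here is a minimal NumPy sketch of a standard leaky-integrator Echo State Network with a ridge-regression readout. This is a hypothetical illustration, not the authors' implementation or their gradient-based optimizer; all variable names and numeric values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 100
input_scaling = 0.5      # scales the input weight matrix W_in
spectral_radius = 0.9    # largest |eigenvalue| of the reservoir matrix W
leaking_rate = 0.3       # alpha in the leaky-integrator state update
ridge_beta = 1e-6        # regularization of the linear readout

# Random input and reservoir weights; W is rescaled to the target spectral radius.
W_in = input_scaling * rng.uniform(-1, 1, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x_tilde = np.tanh(W_in @ u_t + W @ x)
        x = (1 - leaking_rate) * x + leaking_rate * x_tilde
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)[:, None]
X = run_reservoir(u[:-1])
y = u[1:, 0]

# Discard an initial washout, then fit the readout by ridge regression.
washout = 100
X_tr, y_tr = X[washout:], y[washout:]
W_out = np.linalg.solve(X_tr.T @ X_tr + ridge_beta * np.eye(n_res),
                        X_tr.T @ y_tr)
mse = np.mean((X_tr @ W_out - y_tr) ** 2)
```

Each of the four hyperparameters enters the computation at exactly one point (`W_in`, the rescaling of `W`, the state update, and the readout solve), which is what makes the error differentiable with respect to them and hence amenable to the gradient-based tuning the paper proposes.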

Details

Language(s): eng - English
 Dates: 2019-03-08, 2019-07
 Publication Status: Issued
 Rev. Type: Peer
 Identifiers: DOI: 10.1016/j.neunet.2019.02.001


Source 1

Title: Neural Networks
Source Genre: Journal
Volume / Issue: 115
Start / End Page: 23 - 29