
Journal Article

Gradient based hyperparameter optimization in Echo State Networks

MPS-Authors

Parlitz, Ulrich
Research Group Biomedical Physics, Max Planck Institute for Dynamics and Self-Organization, Max Planck Society;

Citation

Thiede, L. A., & Parlitz, U. (2019). Gradient based hyperparameter optimization in Echo State Networks. Neural Networks, 115, 23-29. doi:10.1016/j.neunet.2019.02.001.


Cite as: https://hdl.handle.net/21.11116/0000-0003-99C3-4
Abstract
Like most machine learning algorithms, Echo State Networks possess several hyperparameters that must be carefully tuned to achieve the best performance. To minimize the error on a specific task, we present a gradient-based optimization algorithm for the input scaling, the spectral radius, the leaking rate, and the regularization parameter.