Conference Paper

Algorithmic Stability and Generalization Performance


Bousquet, O., & Elisseeff, A. (2001). Algorithmic Stability and Generalization Performance. In T. Leen, T. Dietterich, & V. Tresp (Eds.), Advances in Neural Information Processing Systems 13 (pp. 196-202). Cambridge, MA, USA: MIT Press.

Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-E2AA-C
We present a novel way of obtaining PAC-style bounds on the generalization error of learning algorithms, explicitly using their stability properties: a stable learner is one whose learned solution does not change much under small changes to the training set. The bounds we obtain do not depend on any measure of the complexity of the hypothesis space (e.g., VC dimension) but rather on how the learning algorithm searches this space, and can thus be applied even when the VC dimension is infinite.
We demonstrate that regularization networks possess the required stability property and apply our method to obtain new bounds on their generalization performance.
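The stability notion in the abstract can be probed empirically. The sketch below is illustrative only and not from the paper: it fits one-dimensional ridge regression (a simple regularization network) on a synthetic dataset and on each leave-one-out copy, then records the largest change in the squared loss at any training point, an empirical proxy for the uniform-stability constant the bounds rely on. All names and data are assumptions.

```python
# Hypothetical sketch: probing leave-one-out stability of 1-D ridge
# regression. Not the authors' code; data and names are illustrative.

def ridge_1d(xs, ys, lam):
    """Minimize (1/n) * sum (w*x - y)^2 + lam * w^2; closed form in 1-D."""
    n = len(xs)
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs) + n * lam
    return num / den

def loo_stability(xs, ys, lam):
    """Largest change in squared loss at any training point when a single
    example is deleted -- an empirical proxy for uniform stability."""
    w_full = ridge_1d(xs, ys, lam)
    beta = 0.0
    for i in range(len(xs)):
        # Retrain with example i removed.
        w_i = ridge_1d(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:], lam)
        for x, y in zip(xs, ys):
            diff = abs((w_full * x - y) ** 2 - (w_i * x - y) ** 2)
            beta = max(beta, diff)
    return beta

# Synthetic data: a noisy line y = 2x with small alternating perturbations.
xs = [0.1 * k for k in range(20)]
ys = [2.0 * x + 0.05 * ((-1) ** k) for k, x in enumerate(xs)]

beta = loo_stability(xs, ys, lam=0.1)
print("empirical stability constant:", beta)
```

The estimate is only a lower bound on the true uniform-stability constant (which takes a supremum over all datasets and test points), but it illustrates the quantity the paper's bounds are phrased in terms of: how little the learned predictor's loss moves when one training example is replaced or removed.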