
Released

Journal Article

Leave One Out Error, Stability, and Generalization of Voting Combinations of Classifiers

MPS-Authors

Elisseeff, A (/persons/resource/persons83901)
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Fulltext (public)
There are no public fulltexts stored in PuRe
Supplementary Material (public)
There is no public supplementary material available
Citation

Evgeniou, T., Pontil, M., & Elisseeff, A. (2004). Leave One Out Error, Stability, and Generalization of Voting Combinations of Classifiers. Machine Learning, 55(1), 71-97. doi:10.1023/B:MACH.0000019805.88351.60.

Cite as: http://hdl.handle.net/21.11116/0000-0005-4F4B-0
Abstract
We study the leave-one-out and generalization errors of voting combinations of learning machines. A special case considered is a variant of bagging. We analyze in detail combinations of kernel machines, such as support vector machines, and present theoretical estimates of their leave-one-out error. We also derive novel bounds on the stability of combinations of any classifiers. These bounds can be used to formally show that, for example, bagging increases the stability of unstable learning machines. We report experiments supporting the theoretical findings.
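The abstract concerns the leave-one-out error of voting combinations such as bagging. As a purely illustrative sketch (not the paper's method: the base learner, dataset, and ensemble size below are all invented for the example), one can estimate this quantity by holding out each point in turn, training an ensemble of classifiers on bootstrap samples of the remaining data, and checking whether the majority vote misclassifies the held-out point:

```python
# Hypothetical sketch: leave-one-out error of a majority-vote (bagging-style)
# combination of simple base classifiers. Nearest-centroid is used only as a
# stand-in base learner; the paper analyzes kernel machines such as SVMs.
import random

def nearest_centroid_fit(points, labels):
    # Train a trivial base classifier: one centroid (mean point) per class.
    sums = {}
    for (x, y), lab in zip(points, labels):
        sx, sy, n = sums.get(lab, (0.0, 0.0, 0))
        sums[lab] = (sx + x, sy + y, n + 1)
    return {lab: (sx / n, sy / n) for lab, (sx, sy, n) in sums.items()}

def nearest_centroid_predict(model, p):
    # Predict the class whose centroid is closest to p.
    return min(model, key=lambda lab: (model[lab][0] - p[0]) ** 2
                                      + (model[lab][1] - p[1]) ** 2)

def loo_error_of_vote(points, labels, n_machines=11, seed=0):
    # Leave each point out, train n_machines base learners on bootstrap
    # samples of the remaining points, and take the majority vote.
    rng = random.Random(seed)
    n = len(points)
    errors = 0
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        votes = []
        for _ in range(n_machines):
            boot = [rng.choice(rest) for _ in rest]  # bootstrap sample
            model = nearest_centroid_fit([points[j] for j in boot],
                                         [labels[j] for j in boot])
            votes.append(nearest_centroid_predict(model, points[i]))
        majority = max(set(votes), key=votes.count)
        errors += (majority != labels[i])
    return errors / n
```

For well-separated toy data the leave-one-out error of the vote is low; the paper's point is that such voting combinations are more stable than their individual members, which yields bounds on exactly this quantity.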