Free keywords:
-
Abstract:
In many domains, reliable a priori knowledge exists that can be used to improve classifier performance. In handwritten digit recognition, for example, such knowledge includes the invariance of class labels under image translations and rotations. In this paper, we present a new generalisation of the Support Vector Machine (SVM) that aims to better incorporate this knowledge. The method extends the Virtual SVM by penalising an approximation of the variance of the decision function across each grouped set of "virtual examples", exploiting the fact that the members of each group should ideally be assigned similar class membership probabilities. The method is shown to be an efficient approximation of the invariant SVM of Chapelle and Schölkopf, with the advantage that it can be solved by a trivial modification of standard SVM optimisation packages, at a negligible increase in computational complexity compared with the Virtual SVM. The efficacy of the method is demonstrated on a simple problem.
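
The core idea above — grouping each training point with transformed "virtual" copies and penalising the variance of the decision function within each group — can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a toy 2-D dataset, a plain linear decision function trained by subgradient descent rather than a modified SVM solver, and arbitrarily chosen hyperparameters (`lam`, `gamma`, `lr`, the ±0.1 rad rotations).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: two Gaussian classes, labels in {-1, +1}.
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

def rotate(x, theta):
    """Rotate a 2-D point about the origin (stand-in for an invariance transform)."""
    c, s = np.cos(theta), np.sin(theta)
    return x @ np.array([[c, -s], [s, c]])

# Each group holds an original example and its virtual (rotated) copies.
groups = [np.vstack([x, rotate(x, 0.1), rotate(x, -0.1)]) for x in X]

w, b = np.zeros(2), 0.0
lam, gamma, lr = 0.01, 1.0, 0.05  # regularisation, variance penalty, step size (illustrative)

for _ in range(200):
    grad_w, grad_b = lam * w, 0.0
    for Xg, yi in zip(groups, y):
        f = Xg @ w + b                     # decision values across the group
        # Hinge loss on every (real and virtual) example, as in the Virtual SVM.
        for fi, xi in zip(f, Xg):
            if 1 - yi * fi > 0:
                grad_w -= yi * xi
                grad_b -= yi
        # Variance penalty: V = mean((f - fbar)^2);
        # dV/dw = (2/n) * sum_i (f_i - fbar)(x_i - xbar); V is invariant to b.
        fbar = f.mean()
        grad_w += gamma * (2.0 / len(f)) * ((f - fbar) @ (Xg - Xg.mean(axis=0)))
    w -= lr * grad_w / len(groups)
    b -= lr * grad_b / len(groups)

acc = np.mean(np.sign(X @ w + b) == y)
```

The variance term pushes the decision function towards assigning the same value to all members of a group, which is the abstract's proxy for "similar class membership probabilities" under the invariance transforms.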