  Kernel Choice and Classifiability for RKHS Embeddings of Probability Distributions

Sriperumbudur, B., Fukumizu, K., Gretton, A., Lanckriet, G., & Schölkopf, B. (2010). Kernel Choice and Classifiability for RKHS Embeddings of Probability Distributions. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, & A. Culotta (Eds.), Advances in Neural Information Processing Systems 22 (pp. 1750-1758). Red Hook, NY, USA: Curran.

Creators

Creators:
Sriperumbudur, BK, Author
Fukumizu, K, Author
Gretton, A (1, 2), Author
Lanckriet, GRG, Author
Schölkopf, B (1, 2), Author
Affiliations:
1: Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: Embeddings of probability measures into reproducing kernel Hilbert spaces have been proposed as a straightforward and practical means of representing and comparing probabilities. In particular, the distance between embeddings (the maximum mean discrepancy, or MMD) has several key advantages over many classical metrics on distributions, namely easy computability, fast convergence and low bias of finite sample estimates. An important requirement of the embedding RKHS is that it be characteristic: in this case, the MMD between two distributions is zero if and only if the distributions coincide. Three new results on the MMD are introduced in the present study. First, it is established that MMD corresponds to the optimal risk of a kernel classifier, thus forming a natural link between the distance between distributions and their ease of classification. An important consequence is that a kernel must be characteristic to guarantee classifiability between distributions in the RKHS. Second, the class of characteristic kernels is broadened to incorporate all strictly positive definite kernels: these include non-translation invariant kernels and kernels on non-compact domains. Third, a generalization of the MMD is proposed for families of kernels, as the supremum over MMDs on a class of kernels (for instance the Gaussian kernels with different bandwidths). This extension is necessary to obtain a single distance measure if a large selection or class of characteristic kernels is potentially appropriate. This generalization is reasonable, given that it corresponds to the problem of learning the kernel by minimizing the risk of the corresponding kernel classifier. The generalized MMD is shown to have consistent finite sample estimates, and its performance is demonstrated on a homogeneity testing example.
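
A minimal sketch of the quantities described in the abstract (not part of this record, and not the authors' implementation): the biased estimate of the MMD for a Gaussian kernel, and the generalized MMD taken as the supremum over a finite grid of bandwidths. The sample data, bandwidth grid, and function names are illustrative assumptions.

import numpy as np

def gaussian_gram(X, Y, sigma):
    # Gram matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd_biased(X, Y, sigma):
    # Biased (V-statistic) estimate of the MMD: the RKHS distance between
    # empirical mean embeddings, sqrt(E k(x,x') + E k(y,y') - 2 E k(x,y)).
    mmd2 = (gaussian_gram(X, X, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean()
            - 2.0 * gaussian_gram(X, Y, sigma).mean())
    return np.sqrt(max(mmd2, 0.0))

def generalized_mmd(X, Y, sigmas):
    # Generalized MMD in the paper's sense: supremum of the MMD over a
    # class of kernels, here a finite family of Gaussian bandwidths.
    return max(mmd_biased(X, Y, s) for s in sigmas)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))  # sample from P
Y = rng.normal(0.5, 1.0, size=(200, 2))  # sample from Q (shifted mean)
print(generalized_mmd(X, Y, sigmas=[0.5, 1.0, 2.0]))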

Details

Language(s): English
Dates: 2010-04
Publication Status: Issued
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: BibTex Citekey: 6131
Degree: -

Event

Title: 23rd Annual Conference on Neural Information Processing Systems (NIPS 2009)
Place of Event: Vancouver, BC, Canada
Start-/End Date: 2009-12-07 - 2009-12-10

Source 1

Title: Advances in Neural Information Processing Systems 22
Source Genre: Proceedings
 Creator(s):
Bengio, Y, Editor
Schuurmans, D, Editor
Lafferty, J, Editor
Williams, C, Editor
Culotta, A, Editor
Affiliations:
-
Publ. Info: Red Hook, NY, USA: Curran
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 1750 - 1758
Identifier: ISBN: 978-1-615-67911-9