Sparse Multiscale Gaussian Process Regression

Walder, C., Kim, K., & Schölkopf, B. (2008). Sparse Multiscale Gaussian Process Regression. In W. Cohen, A. McCallum, & S. Roweis (Eds.), ICML '08: Proceedings of the 25th international conference on Machine learning (pp. 1112-1119). New York, NY, USA: ACM Press.

Description: -
OA-Status: -
Creators

 Creators:
Walder, C. (1, 2, 3), Author
Kim, K. I. (2, 4), Author
Schölkopf, B. (2, 4), Author
Affiliations:
1. Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society (ou_1497797)
2. Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE (ou_1497794)
3. Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE (ou_2528702)
4. Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society (ou_1497795)

Content

Free keywords: -
Abstract: Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criterion, this additional flexibility permits approximations no worse and typically better than was previously possible. We perform gradient-based optimisation of the marginal likelihood, which costs O(m²n) time, where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicable to all kernel-based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly in the case of very few basis functions, i.e. a very high sparsity ratio.
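
The construction described in the abstract is straightforward to sketch in code. Below is a minimal, hypothetical Python illustration (not the authors' implementation): each of the m basis functions gets its own centre and its own diagonal covariance (per-dimension length scales), rather than all basis functions sharing the kernel's single length scale. For simplicity the centres and scales are drawn at random and the weights are fit by ridge regression, one of the kernel methods the abstract mentions; in the paper these parameters would instead be optimised by gradient ascent on the g.p. marginal likelihood, at O(m²n) cost. All function and variable names here are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of multiscale Gaussian basis
# functions: each basis function has its own centre c_i and its own
# diagonal covariance (length scales), generalising the usual sparse-GP
# basis k(., x_i), in which every basis function shares the kernel's scale.
import numpy as np

def gaussian_basis(X, centres, lengthscales):
    """Evaluate m Gaussian basis functions at the rows of X.

    X            : (n, d) inputs
    centres      : (m, d) basis-function centres
    lengthscales : (m, d) per-basis, per-dimension length scales
                   (i.e. arbitrary diagonal covariance matrices)
    Returns the (n, m) design matrix Phi with
    Phi[j, i] = exp(-0.5 * sum_k ((X[j,k] - centres[i,k]) / lengthscales[i,k])**2).
    """
    diff = X[:, None, :] - centres[None, :, :]            # (n, m, d)
    return np.exp(-0.5 * np.sum((diff / lengthscales) ** 2, axis=2))

rng = np.random.default_rng(0)
n, m, d = 200, 5, 1
X = rng.uniform(-3, 3, size=(n, d))
y = np.sin(3 * X[:, 0]) + np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Illustrative stand-in for the paper's learned parameters: random centres
# and one length scale per basis function (per dimension).
centres = rng.uniform(-3, 3, size=(m, d))
lengthscales = rng.uniform(0.2, 2.0, size=(m, d))

# Fit weights by ridge regression in the basis. Forming Phi and solving the
# m x m normal equations costs O(m^2 n), matching the per-evaluation cost
# quoted in the abstract for marginal-likelihood optimisation.
Phi = gaussian_basis(X, centres, lengthscales)            # (n, m)
lam = 1e-2
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
y_hat = Phi @ w
print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```

Note that setting every row of lengthscales to the same vector recovers the standard single-scale basis as a special case, which is why, for a fixed m, the multiscale family can approximate no worse than the conventional construction.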

Details

Language(s): English
 Dates: 2008-07
 Publication Status: Issued
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: DOI: 10.1145/1390156.1390296
BibTex Citekey: 5121
 Degree: -

Event

Title: 25th International Conference on Machine Learning (ICML 2008)
Place of Event: Helsinki, Finland
Start-/End Date: 2008-07-05 - 2008-07-09


Source 1

Title: ICML '08: Proceedings of the 25th international conference on Machine learning
Source Genre: Proceedings
 Creator(s):
Cohen, WW, Editor
McCallum, A, Editor
Roweis, ST, Editor
Affiliations:
-
Publ. Info: New York, NY, USA : ACM Press
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 1112 - 1119
Identifier: ISBN 978-1-60558-205-4