
Record


Released

Conference Paper

Sparse Multiscale Gaussian Process Regression

MPG Authors

Walder, C
Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society;


Kim, KI
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Schölkopf, B
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
Full texts (restricted access)
No full texts are currently released for your IP range.
Full texts (freely accessible)
No freely accessible full texts are available in PuRe.
Supplementary Material (freely accessible)
No freely accessible supplementary materials are available.
Citation

Walder, C., Kim, K., & Schölkopf, B. (2008). Sparse Multiscale Gaussian Process Regression. In W. Cohen, A. McCallum, & S. Roweis (Eds.), ICML '08: Proceedings of the 25th international conference on Machine learning (pp. 1112-1119). New York, NY, USA: ACM Press.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-C841-F
Abstract
Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criterion, this additional flexibility permits approximations no worse and typically better than was previously possible. We perform gradient-based optimisation of the marginal likelihood, which costs O(m²n) time, where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicable to all kernel-based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly in the case of very few basis functions, i.e. a very high sparsity ratio.
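
To make the abstract's central idea concrete (m Gaussian basis functions, each with its own length scale, rather than basis functions inheriting the covariance function's single length scale), here is a minimal weight-space sketch in Python/NumPy. It is an illustration under stated assumptions, not the paper's algorithm: the model f(x) = sum_i w_i phi_i(x) with a unit-normal prior on the weights stands in for the authors' formulation, each basis function is simplified to an isotropic length scale (the paper allows arbitrary diagonal covariances), and all function and variable names are hypothetical. Forming the m-by-m matrix Phi^T Phi dominates the cost at O(m²n), matching the complexity quoted above.

import numpy as np

def multiscale_features(X, centers, length_scales):
    # (n, m) design matrix of Gaussian basis functions; basis function i
    # has its own isotropic length scale length_scales[i] (a hypothetical
    # simplification of the paper's per-basis diagonal covariances).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length_scales[None, :] ** 2)

def fit_predict(X, y, X_star, centers, length_scales, noise_var=0.01):
    # Weight-space regression f(x) = sum_i w_i phi_i(x) with w ~ N(0, I);
    # building Phi^T Phi dominates the cost at O(m²n) for n data points
    # and m basis functions.
    Phi = multiscale_features(X, centers, length_scales)      # (n, m)
    A = Phi.T @ Phi + noise_var * np.eye(centers.shape[0])    # (m, m)
    w = np.linalg.solve(A, Phi.T @ y)                         # posterior mean weights
    return multiscale_features(X_star, centers, length_scales) @ w

# Toy usage: 8 basis functions for 200 noisy samples of a sine wave.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
centers = np.linspace(-3.0, 3.0, 8)[:, None]   # m = 8 basis centres
scales = np.full(8, 0.8)                       # one length scale per basis
f_star = fit_predict(X, y, np.linspace(-3.0, 3.0, 101)[:, None], centers, scales)

In the paper itself the basis centres and length scales are hyperparameters, optimised by gradient ascent on the marginal likelihood; here they are simply fixed, which is enough to show where the extra per-basis flexibility enters relative to sparse methods whose basis functions all share the covariance function's length scale.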