
Released

Report

Sparse Multiscale Gaussian Process Regression

MPG Authors
/persons/resource/persons84294

Walder, C
Max Planck Institute for Biological Cybernetics, Max Planck Society;
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84014

Kim, KI
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

/persons/resource/persons84193

Schölkopf, B
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

External Resources
No external resources are available
Full texts (restricted access)
No full texts are currently available for your IP range.
Full texts (freely accessible)

MPIK-TR-162.pdf
(Publisher version), 330KB

Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Walder, C., Kim, K., & Schölkopf, B. (2007). Sparse Multiscale Gaussian Process Regression (162). Tübingen, Germany: Max Planck Institute for Biological Cybernetics.


Citation link: https://hdl.handle.net/11858/00-001M-0000-0013-CC4F-6
Abstract
Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criterion, this additional flexibility permits approximations no worse than, and typically better than, what was previously possible. Although we focus on g.p. regression, the central idea is applicable to all kernel-based algorithms, such as the support vector machine. We perform gradient-based optimisation of the marginal likelihood, which costs O(m²n) time, where n is the number of data points, and compare the method to various other sparse g.p. methods. Our approach outperforms the other methods, particularly in the case of very few basis functions, i.e. a very high sparsity ratio.
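
To make the central idea concrete, here is a minimal sketch in Python with NumPy of regression with m Gaussian basis functions that each carry their own diagonal length scales. All names (gaussian_basis, fit_sparse_gp, the toy data) are illustrative and not taken from the report, and the fit below uses plain regularised least squares in the induced weight space rather than the report's gradient-based marginal-likelihood optimisation; only the O(m²n) cost of forming the m-by-m Gram matrix matches the complexity quoted in the abstract.

import numpy as np

def gaussian_basis(X, centres, length_scales):
    """Evaluate m Gaussian basis functions with individual diagonal
    length scales at the rows of X.

    X             : (n, d) inputs
    centres       : (m, d) basis centres
    length_scales : (m, d) per-basis, per-dimension length scales
    returns       : (n, m) design matrix Phi
    """
    diff = X[:, None, :] - centres[None, :, :]        # (n, m, d)
    sq = (diff / length_scales[None, :, :]) ** 2      # scaled per basis
    return np.exp(-0.5 * sq.sum(axis=-1))             # (n, m)

def fit_sparse_gp(X, y, centres, length_scales, noise_var=1e-2):
    """Regularised least-squares fit of the finite Gaussian-basis model.
    Forming Phi.T @ Phi costs O(m^2 n), the complexity quoted above."""
    Phi = gaussian_basis(X, centres, length_scales)    # (n, m)
    A = Phi.T @ Phi + noise_var * np.eye(Phi.shape[1]) # (m, m)
    return np.linalg.solve(A, Phi.T @ y)               # weight vector

def predict(X_new, centres, length_scales, w):
    return gaussian_basis(X_new, centres, length_scales) @ w

# Toy 1-D example: m = 8 basis functions, each with its own length scale.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(3 * X[:, 0]) + np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

m = 8
centres = np.linspace(-3, 3, m)[:, None]
length_scales = np.linspace(0.3, 1.5, m)[:, None]   # "multiscale" spread

w = fit_sparse_gp(X, y, centres, length_scales)
print(predict(np.array([[0.0], [1.0]]), centres, length_scales, w))

In the report's approach, the per-basis length scales (and centres) are free parameters; one would optimise them by gradient-based ascent on the marginal likelihood rather than fixing them as this toy example does.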