Sparse Multiscale Gaussian Process Regression

Walder, C., Kim, K., & Schölkopf, B. (2008). Sparse Multiscale Gaussian Process Regression. In W. Cohen, A. McCallum, & S. Roweis (Eds.), ICML '08: Proceedings of the 25th international conference on Machine learning (pp. 1112-1119). New York, NY, USA: ACM Press.


External references

External reference:
https://dl.acm.org/citation.cfm?doid=1390156.1390296 (publisher's version)
Description:
-
OA status: -

Creators

Creators:
Walder, C (1, 2, 3), Author
Kim, KI (2, 4), Author
Schölkopf, B (2, 4), Author
Affiliations:
1. Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497797
2. Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794
3. Project group: Cognitive Engineering, Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_2528702
4. Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795

Content

Keywords: -
Abstract: Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criterion, this additional flexibility permits approximations no worse, and typically better, than was previously possible. We perform gradient-based optimisation of the marginal likelihood, which costs O(m²n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Although we focus on g.p. regression, the central idea is applicable to all kernel-based algorithms, and we also provide some results for the support vector machine (s.v.m.) and kernel ridge regression (k.r.r.). Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.
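
The abstract's central construction, approximating the g.p. with m Gaussian basis functions that each carry their own diagonal length scales, can be sketched in a few lines. The Python snippet below is a minimal illustration under that assumption only: it fits a ridge-style model on such a multiscale basis rather than performing the paper's marginal-likelihood optimisation, and the names gaussian_basis and fit_predict are hypothetical, not from the paper.

    import numpy as np

    def gaussian_basis(X, centers, length_scales):
        # Evaluate m Gaussian basis functions, each with its own diagonal
        # length scales, at the n points in X.  Returns an (n, m) matrix.
        # (Hypothetical illustration, not the authors' reference code.)
        diff = X[:, None, :] - centers[None, :, :]             # (n, m, d)
        return np.exp(-0.5 * np.sum((diff / length_scales) ** 2, axis=2))

    def fit_predict(X, y, X_star, centers, length_scales, noise=0.1):
        # Ridge-style posterior mean on the multiscale basis.  Forming
        # Phi.T @ Phi costs O(m^2 n), the complexity quoted in the
        # abstract; the paper's marginal-likelihood optimisation is
        # not reproduced here.
        Phi = gaussian_basis(X, centers, length_scales)        # (n, m)
        A = Phi.T @ Phi + noise ** 2 * np.eye(Phi.shape[1])    # (m, m)
        w = np.linalg.solve(A, Phi.T @ y)                      # (m,)
        return gaussian_basis(X_star, centers, length_scales) @ w

    # Toy 1-D usage: two basis functions with deliberately different scales.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(50, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
    centers = np.array([[-1.5], [1.5]])
    length_scales = np.array([[0.5], [2.0]])                   # "multiscale"
    print(fit_predict(X, y, np.array([[0.0]]), centers, length_scales))

The per-basis length_scales array is what distinguishes the multiscale idea from standard sparse bases, where every basis function inherits the single length scale of the covariance function.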

Details

Language(s):
Date: 2008-07
Publication status: Published
Pages: -
Place, publisher, edition: -
Table of contents: -
Review type: -
Identifiers: DOI: 10.1145/1390156.1390296
BibTeX citekey: 5121
Degree type: -

Event

Title: 25th International Conference on Machine Learning (ICML 2008)
Venue: Helsinki, Finland
Start/end date: 2008-07-05 - 2008-07-09

Decision

-

Project information

-

Source 1

Title: ICML '08: Proceedings of the 25th international conference on Machine learning
Source genre: Conference proceedings
Creators:
Cohen, WW, Editor
McCallum, A, Editor
Roweis, ST, Editor
Affiliations:
-
Place, publisher, edition: New York, NY, USA : ACM Press
Pages: -
Volume / Issue: -
Article number: -
Start / end page: 1112 - 1119
Identifier: ISBN: 978-1-60558-205-4