  Analysis of Some Methods for Reduced Rank Gaussian Process Regression

Quinonero Candela, J., & Rasmussen, C. (2005). Analysis of Some Methods for Reduced Rank Gaussian Process Regression. In R. Murray-Smith, & R. Shorten (Eds.), Switching and Learning in Feedback Systems: European Summer School on Multi-Agent Control, Maynooth, Ireland, September 8-10, 2003 (pp. 98-127). Berlin, Germany: Springer.

Basic
Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-D6D5-3
Version Permalink: http://hdl.handle.net/21.11116/0000-0005-0DD3-F
Genre: Conference Paper

Creators

Creators:
Quinonero Candela, J. (Author)
Rasmussen, C. E. (1, 2) (Author)
Affiliations:
1: Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
2: Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794

Content

Free keywords: -
Abstract: While there is strong motivation for using Gaussian Processes (GPs) due to their excellent performance in regression and classification problems, their computational complexity makes them impractical when the size of the training set exceeds a few thousand cases. This has motivated a recent proliferation of cost-effective approximations to GPs, both for classification and for regression. In this paper we analyze one popular approximation to GPs for regression: the reduced rank approximation. While generally GPs are equivalent to infinite linear models, we show that Reduced Rank Gaussian Processes (RRGPs) are equivalent to finite sparse linear models. We also introduce the concept of degenerate GPs and show that they correspond to inappropriate priors. We show how to modify the RRGP to prevent it from being degenerate at test time. Training an RRGP consists of learning both the covariance function hyperparameters and the support set. We propose a method for learning hyperparameters for a given support set. We also review the Sparse Greedy GP (SGGP) approximation (Smola and Bartlett, 2001), which is a way of learning the support set for given hyperparameters based on approximating the posterior. We propose an alternative method to the SGGP that has better generalization capabilities. Finally, we present experiments comparing the different ways of training an RRGP. We provide some Matlab code for learning RRGPs.
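The finite sparse linear model the abstract refers to can be sketched in a few lines of NumPy: a subset-of-regressors-style reduced-rank predictor whose weights live on an m-point support set. This is a minimal illustration, not the paper's Matlab code; the squared-exponential kernel, the hyperparameter values (lengthscale 1, noise variance 0.01), and the random choice of support set are all assumptions made here for the sketch (the paper is precisely about learning the hyperparameters and the support set rather than fixing them).

```python
import numpy as np

def rbf(A, B, ell=1.0, sf2=1.0):
    # Squared-exponential covariance between point sets A (n, d) and B (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(0)
n, m = 200, 15
X = rng.uniform(-3.0, 3.0, (n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# Support set: m training inputs chosen at random here; learning a good
# support set (e.g. greedily, as in SGGP) is one of the paper's topics.
Xm = X[rng.choice(n, size=m, replace=False)]

sigma2 = 0.01                 # noise variance, assumed known for this sketch
Kmn = rbf(Xm, X)              # (m, n) cross-covariances
Kmm = rbf(Xm, Xm)             # (m, m) support-set covariances

# Posterior mean of the m weights of the finite linear model:
# alpha = (Kmn Knm + sigma2 Kmm)^{-1} Kmn y  (small jitter for stability)
A = Kmn @ Kmn.T + sigma2 * Kmm + 1e-8 * np.eye(m)
alpha = np.linalg.solve(A, Kmn @ y)

# Predictive mean at test inputs: a finite linear-in-kernel model with
# only m basis functions instead of one per training point.
Xs = np.linspace(-3.0, 3.0, 50)[:, None]
mean = rbf(Xs, Xm) @ alpha
```

Each prediction costs O(m) instead of O(n), which is the computational point of the reduced-rank approximation; the degeneracy at test time that the paper corrects shows up in the predictive variance, which this mean-only sketch omits.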

Details

Language(s):
 Dates: 2005
 Publication Status: Published in print
 Pages: -
 Publishing info: -
 Table of Contents: -
 Rev. Type: -
 Identifiers: DOI: 10.1007/978-3-540-30560-6_4
BibTex Citekey: 2745
 Degree: -

Event

show
hide
Title: European Summer School on Multi-Agent Control 2003
Place of Event: Maynooth, Ireland
Start-/End Date: 2003-09-08 - 2003-09-10

Source 1

Title: Switching and Learning in Feedback Systems: European Summer School on Multi-Agent Control, Maynooth, Ireland, September 8-10, 2003
Source Genre: Proceedings
 Creator(s):
Murray-Smith, R, Editor
Shorten, R, Editor
Affiliations:
-
Publ. Info: Berlin, Germany : Springer
Pages: -
Volume / Issue: -
Sequence Number: -
Start / End Page: 98 - 127
Identifier: ISBN: 978-3-540-24457-8

Source 2

Title: Lecture Notes in Computer Science
Source Genre: Series
 Creator(s):
Affiliations:
Publ. Info: -
Pages: -
Volume / Issue: 3355
Sequence Number: -
Start / End Page: -
Identifier: -