  Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection

Tsuda, K., Rätsch, G., & Warmuth, M. (2005). Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection. The Journal of Machine Learning Research, 6, 995-1018.

Basic
Item Permalink: http://hdl.handle.net/11858/00-001M-0000-0013-D54F-C
Version Permalink: http://hdl.handle.net/21.11116/0000-0004-D7B5-D
Genre: Journal Article

Files


Locators

Description:
-

Creators

Creators:
Tsuda, K. 1, 2, Author
Rätsch, G. 3, Author
Warmuth, M., Author
Affiliations:
1 Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society, ou_1497795
2 Max Planck Institute for Biological Cybernetics, Max Planck Society, Spemannstrasse 38, 72076 Tübingen, DE, ou_1497794
3 Friedrich Miescher Laboratory, Max Planck Society, Max-Planck-Ring 9, 72076 Tübingen, DE, ou_2575692

Content

Free keywords: -
Abstract: We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated with the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: on-line learning with a simple square loss, and finding a symmetric positive definite matrix subject to linear constraints. The updates generalize the exponentiated gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the derivation and the analyses of the original EG update and AdaBoost generalize to the non-diagonal case. We apply the resulting matrix exponentiated gradient (MEG) update and DefiniteBoost to the problem of learning a kernel matrix from distance measurements.
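The MEG update the abstract describes can be sketched in a few lines: the parameter matrix is updated in the matrix-logarithm domain and re-exponentiated, then normalized to trace one, which keeps it symmetric positive definite. This is an illustrative reconstruction, not the authors' code; the step size, dimensions, and target matrix below are made up for the demo, and the square loss follows the on-line learning setting mentioned in the abstract.

```python
import numpy as np

def sym_logm(W):
    # Matrix logarithm of a symmetric positive definite matrix
    # via eigendecomposition: V diag(log(lambda)) V^T.
    vals, vecs = np.linalg.eigh(W)
    return (vecs * np.log(vals)) @ vecs.T

def sym_expm(S):
    # Matrix exponential of a symmetric matrix: V diag(exp(lambda)) V^T.
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.exp(vals)) @ vecs.T

def meg_update(W, x, y, eta=0.1):
    # One MEG step for the square loss (x^T W x - y)^2.
    # W <- exp(log W - eta * grad), normalized to trace one: the
    # exp/log pair preserves symmetric positive definiteness.
    y_hat = x @ W @ x
    grad = 2.0 * (y_hat - y) * np.outer(x, x)  # gradient of the loss in W
    W_new = sym_expm(sym_logm(W) - eta * grad)
    return W_new / np.trace(W_new)

# Usage: learn a trace-one SPD matrix from (x, y) pairs.
rng = np.random.default_rng(0)
d = 4
W = np.eye(d) / d                       # maximum-entropy start: identity / d
target = np.diag([0.4, 0.3, 0.2, 0.1])  # hypothetical ground-truth matrix
for _ in range(500):
    x = rng.standard_normal(d)
    W = meg_update(W, x, x @ target @ x, eta=0.05)
```

After the loop, W remains symmetric positive definite with unit trace by construction, regardless of the data seen.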

Details
Language(s):
Dates: 2005-06
Publication Status: Published in print
Pages: -
Publishing info: -
Table of Contents: -
Rev. Type: -
Identifiers: BibTeX Citekey: 4143
Degree: -

Event

Legal Case

Project information
Source 1

Title: The Journal of Machine Learning Research
Source Genre: Journal
 Creator(s):
Affiliations:
Publ. Info: Cambridge, MA : MIT Press
Pages: -
Volume / Issue: 6
Sequence Number: -
Start / End Page: 995 - 1018
Identifier: ISSN: 1532-4435
CoNE: https://pure.mpg.de/cone/journals/resource/111002212682020_1