Conference Paper

Learning output kernels with block coordinate descent

MPS-Authors

Dinuzzo, F.
Dept. Empirical Inference, Max Planck Institute for Intelligent Systems, Max Planck Society;

Ong, C. S.
Max Planck Society;


Gehler, P. V.
Dept. Perceiving Systems, Max Planck Institute for Intelligent Systems, Max Planck Society;

Citation

Dinuzzo, F., Ong, C. S., Gehler, P. V., & Pillonetto, G. (2011). Learning output kernels with block coordinate descent. In L. Getoor, & T. Scheffer (Eds.), Proceedings of the 28th International Conference on Machine Learning (ICML 2011) (pp. 49-56).


Cite as: http://hdl.handle.net/11858/00-001M-0000-0010-4C9F-5
Abstract
We propose a method to simultaneously learn a vector-valued function and a kernel between its components. The obtained kernel can be used both to improve learning performance and to reveal structures in the output space that may be important in their own right. Our method is based on the solution of a suitable regularization problem over a reproducing kernel Hilbert space (RKHS) of vector-valued functions. Although the regularized risk functional is non-convex, we show that it is invex, implying that all local minimizers are global minimizers. We derive a block-wise coordinate descent method that efficiently exploits the structure of the objective functional. We then empirically demonstrate that the proposed method can improve classification accuracy. Finally, we provide a visual interpretation of the learned kernel matrix for some well-known datasets.
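
As a rough illustration of the alternating scheme the abstract describes, the following is a minimal, self-contained NumPy sketch of block coordinate descent for output kernel learning. It is written against an assumed objective J(C, L) = ||Y - K C L||_F^2 + lam * tr(C^T K C L) + mu * ||L||_F^2 with the output kernel L constrained to the positive semidefinite cone; this objective, the projected-gradient L-update, and all names (fit_output_kernel, lam, mu) are illustrative assumptions, not the paper's exact formulation, whose fulltext is not stored in this record.

import numpy as np

def fit_output_kernel(K, Y, lam=1.0, mu=1.0, outer_iters=50, l_steps=20):
    """Alternating (block coordinate) minimization of the ASSUMED objective
    J(C, L) = ||Y - K C L||_F^2 + lam*tr(C^T K C L) + mu*||L||_F^2
    over coefficients C and a positive semidefinite output kernel L."""
    n, m = Y.shape
    L = np.eye(m)                        # start from the identity output kernel
    s, U = np.linalg.eigh(K)             # eigendecompose the input Gram matrix once
    for _ in range(outer_iters):
        # C-step: with L fixed, stationarity reduces to the Sylvester-type
        # system K C L + lam*C = Y, solved exactly in the eigenbases of K and L.
        t, V = np.linalg.eigh(L)
        Yt = U.T @ Y @ V
        C = U @ (Yt / (np.outer(s, t) + lam)) @ V.T
        # L-step: with C fixed, J is a convex quadratic in L; take projected
        # gradient steps, projecting onto the PSD cone after each one.
        A = K @ C                        # predictions are A @ L
        E = C.T @ A                      # = C^T K C, the RKHS-norm factor
        step = 1.0 / (2 * np.linalg.norm(A.T @ A, 2) + 2 * mu)  # 1/Lipschitz
        for _ in range(l_steps):
            grad = 2 * A.T @ (A @ L - Y) + lam * E + 2 * mu * L
            L = L - step * grad
            w, Q = np.linalg.eigh((L + L.T) / 2)     # symmetrize, then clip
            L = (Q * np.clip(w, 0.0, None)) @ Q.T    # nearest-PSD projection
    return C, L

In a classification setting like the paper's experiments, Y would hold one label-indicator column per class, and the learned L could then be inspected as a class-similarity matrix, in the spirit of the abstract's visual interpretation of the learned kernel.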