Conference Paper

Learning Output Kernels with Block Coordinate Descent


Gehler, Peter
Computer Vision and Multimodal Computing, MPI for Informatics, Max Planck Society


Dinuzzo, F., Ong, C. S., Gehler, P., & Pillonetto, G. (2011). Learning Output Kernels with Block Coordinate Descent. In L. Getoor, & T. Scheffer (Eds.), Proceedings of the 28th International Conference on Machine Learning (pp. 49-56). Madison, WI: Omnipress. Retrieved from http://www.icml-2011.org/papers/54_icmlpaper.pdf.

Cite as: https://hdl.handle.net/11858/00-001M-0000-0010-12DF-6
We propose a method to learn simultaneously a vector-valued function and a kernel between its components. The obtained kernel can be used both to improve learning performance and to reveal structures in the output space that may be important in their own right. Our method is based on the solution of a suitable regularization problem over a reproducing kernel Hilbert space of vector-valued functions. Although the regularized risk functional is non-convex, we show that it is invex, implying that all local minimizers are global minimizers. We derive a block-wise coordinate descent method that efficiently exploits the structure of the objective functional. Then, we empirically demonstrate that the proposed method can improve classification accuracy. Finally, we provide a visual interpretation of the learned kernel matrix for some well-known datasets.
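The block-wise alternation described in the abstract can be illustrated with a small NumPy sketch. Everything below is an assumption for illustration: the function name `fit_output_kernel`, the simplified Frobenius-norm objective J(C, L) = ||Y − K C L||²_F + λ||C||²_F + μ||L||²_F, and the projection step are hypothetical stand-ins, not the paper's actual regularization functional or its closed-form updates. The sketch only shows the general pattern of alternating exact minimization over a coefficient block C and an output-kernel block L.

```python
import numpy as np

def fit_output_kernel(K, Y, lam=0.1, mu=0.1, iters=20, seed=0):
    """Block coordinate descent sketch for output kernel learning.

    Alternately minimises the *assumed, simplified* objective
        J(C, L) = ||Y - K C L||_F^2 + lam ||C||_F^2 + mu ||L||_F^2
    over coefficients C and output kernel L, keeping L symmetric positive
    semidefinite by eigenvalue clipping.  An illustrative variant only,
    not the exact updates derived in the paper.
    """
    n, m = Y.shape
    rng = np.random.default_rng(seed)
    L = np.eye(m)                                  # start from identity output kernel
    C = 0.01 * rng.standard_normal((n, m))

    def objective(C, L):
        return (np.linalg.norm(Y - K @ C @ L) ** 2
                + lam * np.linalg.norm(C) ** 2
                + mu * np.linalg.norm(L) ** 2)

    history = [objective(C, L)]
    for _ in range(iters):
        # C-step: the normal equations K^2 C L^2 + lam C = K Y L decouple
        # entrywise in the joint eigenbases of the symmetric K and L.
        s, U = np.linalg.eigh(K)
        t, V = np.linalg.eigh(L)
        rhs = U.T @ (K @ Y @ L) @ V                # = diag(s) (U^T Y V) diag(t)
        C = U @ (rhs / (np.outer(s ** 2, t ** 2) + lam)) @ V.T
        # L-step: ridge regression of Y on E = K C, then projection onto
        # the PSD cone so L remains a valid kernel between outputs.
        E = K @ C
        L = np.linalg.solve(E.T @ E + mu * np.eye(m), E.T @ Y)
        L = 0.5 * (L + L.T)                        # symmetrise
        w, W = np.linalg.eigh(L)
        L = (W * np.clip(w, 0.0, None)) @ W.T      # clip negative eigenvalues
        history.append(objective(C, L))
    return C, L, history
```

Each sub-problem here has a closed-form solution, which is what makes block coordinate descent attractive for this kind of structured objective: the C-step is a generalized Sylvester system solved by eigendecomposition, and the L-step is an ordinary ridge regression followed by a PSD projection. The learned m×m matrix L plays the role of the kernel between output components and can be inspected directly, e.g. as a heat map.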