Abstract:
We formulate the metric learning problem as that of minimizing the differential
relative entropy between two multivariate Gaussians under constraints on the
Mahalanobis distance function. Via a surprising equivalence, we show that this
problem can be solved as a low-rank kernel learning problem. Specifically, we
minimize the Burg divergence of a low-rank kernel to an input kernel, subject to
pairwise distance constraints. Our approach has several advantages over existing
methods. First, we present a natural information-theoretic formulation for the
problem. Second, the algorithm uses the methods developed by Kulis et al. [6],
which involve no eigenvector computation; in particular, our method runs faster
than most existing techniques. Third, the formulation
offers insights into connections between metric learning and kernel learning.
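
In symbols, the formulation reads roughly as follows (a sketch with notation we introduce for illustration: $A_0$ is the Mahalanobis matrix of the prior Gaussian, $S$ and $D$ are the sets of similar and dissimilar pairs, and $u$ and $\ell$ are the distance bounds; the paper's exact constants may differ):

\[
\begin{aligned}
\min_{A \succeq 0} \quad & \mathrm{KL}\bigl(p(x; A_0) \,\|\, p(x; A)\bigr) \\
\text{s.t.} \quad & d_A(x_i, x_j) \le u \quad \text{for } (i, j) \in S, \\
& d_A(x_i, x_j) \ge \ell \quad \text{for } (i, j) \in D,
\end{aligned}
\]

where $d_A(x, y) = (x - y)^\top A (x - y)$ and $p(x; A)$ denotes a Gaussian with covariance $A^{-1}$. For Gaussians with equal means, $\mathrm{KL}(p(x; A_0) \,\|\, p(x; A)) = \tfrac{1}{2} D_B(A, A_0)$, where $D_B(A, A_0) = \mathrm{tr}(A A_0^{-1}) - \log\det(A A_0^{-1}) - n$ is the Burg (LogDet) divergence; this identity underlies the equivalence mentioned above.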
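
To illustrate the eigenvector-free updates alluded to above, here is a minimal sketch of cyclic Bregman projections under the LogDet (Burg) divergence in Python. The function name, the constraint format, the identity prior, and the omission of slack variables are simplifying assumptions of ours, not the authors' implementation:

import numpy as np

def logdet_metric_sketch(X, constraints, u, l, max_sweeps=50, tol=1e-6):
    """Cyclic Bregman projections under the LogDet (Burg) divergence.

    X           : (n, d) data matrix
    constraints : iterable of (i, j, is_similar) triples (hypothetical format)
    u, l        : upper bound for similar pairs, lower bound for dissimilar pairs
    Returns a Mahalanobis matrix A, starting from the prior A0 = I.
    """
    d = X.shape[1]
    A = np.eye(d)  # prior A0 = I, i.e. the Euclidean distance
    for _ in range(max_sweeps):
        worst = 0.0
        for i, j, is_similar in constraints:
            v = X[i] - X[j]
            p = float(v @ A @ v)        # current squared Mahalanobis distance
            b = u if is_similar else l  # bound this constraint imposes
            violated = (p > b) if is_similar else (p < b)
            if not violated or p < 1e-12:
                continue
            # The Bregman projection onto {A : v^T A v = b} under the LogDet
            # divergence has the closed-form rank-one update
            #   A <- A + beta * (A v)(A v)^T  with  beta = (b - p) / p^2,
            # so no eigenvector computation is needed.
            beta = (b - p) / (p * p)
            Av = A @ v
            A = A + beta * np.outer(Av, Av)
            worst = max(worst, abs(b - p) / p)  # relative violation before update
        if worst < tol:
            break
    return A

# Toy usage: 100 points in 5 dimensions, one similar and one dissimilar pair.
X = np.random.randn(100, 5)
A = logdet_metric_sketch(X, [(0, 1, True), (0, 2, False)], u=1.0, l=10.0)

Each projection costs O(d^2) time, which is the source of the running-time advantage over eigendecomposition-based methods noted above.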