Released

Conference Paper

Fast Kernel ICA using an Approximate Newton Method

MPS-Authors

Jegelka, S
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;


Gretton, A
Department Empirical Inference, Max Planck Institute for Biological Cybernetics, Max Planck Society;
Max Planck Institute for Biological Cybernetics, Max Planck Society;

Citation

Shen, H., Jegelka, S., & Gretton, A. (2007). Fast Kernel ICA using an Approximate Newton Method. In M. Meila & X. Shen (Eds.), Artificial Intelligence and Statistics, 21-24 March 2007, San Juan, Puerto Rico (pp. 476-483). Madison, WI, USA: International Machine Learning Society.


Cite as: https://hdl.handle.net/11858/00-001M-0000-0013-CE7B-0
Abstract
Recent approaches to independent component analysis (ICA) have used kernel independence measures to obtain very good performance, particularly where classical methods experience difficulty (for instance, sources with near-zero kurtosis). We present Fast Kernel ICA (FastKICA), a novel optimisation technique for one such kernel independence measure, the Hilbert-Schmidt independence criterion (HSIC). Our search procedure uses an approximate Newton method on the special orthogonal group, where we estimate the Hessian locally about independence. We employ incomplete Cholesky decomposition to efficiently compute the gradient and approximate Hessian. FastKICA yields more accurate solutions at a given computational cost than gradient descent, and is relatively insensitive to local minima when initialised far from independence. These properties allow kernel approaches to be extended to problems with larger numbers of sources and observations. Our method is competitive with other modern and classical ICA approaches in both speed and accuracy.
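
As a rough illustration of the quantities mentioned in the abstract, the sketch below estimates HSIC between two samples using a low-rank incomplete Cholesky factorisation of each Gaussian kernel matrix, so the n x n kernel matrices are never formed. It is not the authors' FastKICA implementation and omits the approximate Newton step on the special orthogonal group; the function names (rbf_kernel_column, incomplete_cholesky, hsic_chol), the kernel choice, and the bandwidth and tolerance defaults are assumptions made for this example.

import numpy as np

def rbf_kernel_column(x_i, X, sigma):
    # One column of the Gaussian (RBF) kernel matrix: k(x_i, x_j) for all j.
    d2 = np.sum((X - x_i) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def incomplete_cholesky(X, sigma, tol=1e-6):
    # Pivoted incomplete Cholesky of the RBF kernel matrix, K ~= G @ G.T.
    n = X.shape[0]
    G = np.zeros((n, n))
    d = np.ones(n)                # residual diagonal; k(x, x) = 1 for the RBF kernel
    rank = 0
    for j in range(n):
        i = int(np.argmax(d))     # pivot on the largest residual diagonal entry
        if d[i] <= tol:
            break
        col = rbf_kernel_column(X[i], X, sigma) - G[:, :j] @ G[i, :j]
        G[:, j] = col / np.sqrt(d[i])
        d -= G[:, j] ** 2
        rank += 1
    return G[:, :rank]

def hsic_chol(X, Y, sigma_x=1.0, sigma_y=1.0, tol=1e-6):
    # Biased empirical HSIC = trace(K H L H) / n^2 with K ~= G @ G.T, L ~= F @ F.T,
    # computed as ||(H G)^T (H F)||_F^2 / n^2, where H = I - (1/n) 1 1^T centres the data.
    n = X.shape[0]
    G = incomplete_cholesky(X, sigma_x, tol)
    F = incomplete_cholesky(Y, sigma_y, tol)
    Gc = G - G.mean(axis=0)       # equals H @ G
    Fc = F - F.mean(axis=0)       # equals H @ F
    return np.sum((Gc.T @ Fc) ** 2) / n ** 2

For estimated sources S of shape (n, m), the pairwise dependence between components a and b could then be scored as hsic_chol(S[:, [a]], S[:, [b]]); in an ICA search this score would be driven towards zero over rotations of the unmixing matrix.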