Abstract:
Our current understanding of how neurons in the early visual system respond to the light intensity patterns on the retina can be described essentially in terms of firing rates and linear filtering (plus pointwise nonlinearities). It has been suggested that the purpose of this filtering is to represent the retinal image by the activation pattern of statistically less dependent features (i.e., redundancy reduction). In particular, the filters found with independent component analysis (ICA) for natural images resemble important properties of simple cells in striate cortex: they are localized, oriented, and bandpass.
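As a concrete illustration of this point (not part of the original study), the following sketch fits ICA to patches sampled from an image; the rows of `components_` are the learned filters, which for natural images typically come out localized, oriented, and bandpass. It assumes scikit-learn >= 1.1 (for the `whiten="unit-variance"` option); the stand-in image below is synthetic and only keeps the example self-contained.

```python
# Minimal sketch (not the authors' code): fit ICA to image patches
# and inspect the learned basis functions.
import numpy as np
from sklearn.decomposition import FastICA

def sample_patches(image, patch_size=8, n_patches=5000, seed=0):
    """Draw random square patches and flatten them, removing the DC component."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rows = rng.integers(0, h - patch_size, n_patches)
    cols = rng.integers(0, w - patch_size, n_patches)
    patches = np.stack([
        image[r:r + patch_size, c:c + patch_size].ravel()
        for r, c in zip(rows, cols)
    ])
    return patches - patches.mean(axis=1, keepdims=True)

# Stand-in image with 1/f-like correlations; with real natural images
# the resulting filters are the edge-like basis functions discussed above.
image = np.random.default_rng(1).standard_normal((256, 256)).cumsum(axis=0).cumsum(axis=1)

# DC removal leaves the patches in a 63-dimensional subspace,
# so we fit one component fewer than the 64 pixels per patch.
ica = FastICA(n_components=63, whiten="unit-variance", random_state=0, max_iter=500)
ica.fit(sample_patches(image))
edge_filters = ica.components_.reshape(-1, 8, 8)  # one 8x8 filter per row
print(edge_filters.shape)  # (63, 8, 8)
```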
In contrast to the many possible second-order decorrelation transforms, ICA returns a unique answer by additionally optimizing over higher-order correlations. However, how large this additional gain of ICA is compared with second-order methods has never been tested quantitatively. Here, we estimate the gain in statistical independence (the multi-information reduction) achieved with ICA, principal component analysis (PCA), zero-phase whitening, and predictive coding. A randomly sampled whitening basis and the Haar wavelet are included in the comparison as well. All of these methods are compared for different patch sizes, ranging from 2×2 to 16×16 pixels. In spite of large differences in the shape of the basis functions, we find only small differences in multi-information between the decorrelation transforms (5% or less) for all patch sizes. Among the second-order methods, PCA is optimal for small patch sizes and predictive coding performs best for large patch sizes. In summary, the 'edge filters' found with ICA yield only a surprisingly small improvement in terms of ICA's actual objective, and we conclude that a restriction to linear filtering does not fit well with the idea of higher-order decorrelation. In addition, psychophysical data are presented that further corroborate this conclusion.
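For reference, the objective underlying this comparison can be stated compactly. The multi-information of a patch vector measures its total statistical dependence, and for an invertible linear transform its reduction can be computed from marginal entropies alone. The identities below are standard information theory; the notation is chosen here for illustration rather than quoted from the paper.

```latex
% Multi-information: the KL divergence between the joint density and the
% product of its marginals (zero iff the components are independent).
I(X) = D_{\mathrm{KL}}\!\Big(p(x)\,\Big\|\,\prod_{i=1}^{n} p(x_i)\Big)
     = \sum_{i=1}^{n} H(X_i) - H(X).

% For an invertible linear transform Y = WX, the joint entropy shifts by
% \log\lvert\det W\rvert, so the multi-information reduction achieved by W
% depends only on marginal entropies and the determinant:
\Delta I = I(X) - I(Y)
         = \sum_{i=1}^{n} H(X_i) - \sum_{i=1}^{n} H(Y_i)
           + \log\lvert\det W\rvert .
```

This identity is what makes a quantitative comparison tractable: each transform can be scored by how much it lowers the sum of marginal entropies, after accounting for the volume change log|det W|, without ever estimating the joint entropy directly.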