Abstract:
Translating the information carried by the retinal image into a more useful representation is generally considered an important goal of early processing in biological vision. During the last decade, many models have been proposed that derive filters whose shapes resemble prominent properties of receptive fields in the early visual pathways. The filters in these models are determined by optimizing a certain objective function. While the use of optimality principles seems to imply that these filters are superior for scene analysis and object recognition, this supposition has not been thoroughly verified. Minimization of the statistical higher-order dependencies between the filter outputs (ICA) has been used to derive localized, oriented, and bandpass filters resembling the receptive fields of simple cells in V1. A quantitative analysis of the reduction of statistical dependencies achieved with this model, however, reveals only a small improvement over arbitrary second-order decorrelation filters. In addition, I will present psychophysical results showing that, perceptually, the independent components of natural images exhibit even more dependencies than the non-localized basis functions of the discrete cosine transform used in image compression. Finally, I will present a slightly different objective which similarly leads to localized, oriented, and bandpass image filters but instead seeks to divide the sensory information into clusters of similar image content.
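The ICA approach summarized above can be sketched in a few lines. This is a minimal illustration only: the FastICA solver, the synthetic Laplace-distributed stand-in for natural image patches, and all parameter values are my assumptions, not the setup used in the work described.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Sketch: learn filters whose outputs are as statistically independent as
# possible. Real experiments sample patches from natural images; here we
# use synthetic sparse (Laplace) data purely so the example is runnable.
rng = np.random.default_rng(0)
n_patches, patch_dim = 2000, 8 * 8            # 8x8 patches, flattened
patches = rng.laplace(size=(n_patches, patch_dim))

ica = FastICA(n_components=32, whiten="unit-variance",
              random_state=0, max_iter=500)
codes = ica.fit_transform(patches)            # filter outputs per patch
filters = ica.components_                     # learned filters, shape (32, 64)

# On natural images such filters come out localized, oriented, and bandpass,
# resembling V1 simple-cell receptive fields.
print(filters.shape)
```

Measuring how much the residual dependencies between the rows of `codes` actually shrink, relative to plain second-order decorrelation (whitening), is the quantitative comparison the abstract refers to.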