Abstract:
Our visual environment is highly structured, generating complex statistical dependencies in natural scenes. In the framework of information theory, these dependencies among input variables make the input redundant, and removing this redundancy leads to an efficient code for natural scenes. A number of computational algorithms have been developed on the basis of this redundancy-reduction assumption. Four of them, frequently discussed in the context of natural scene statistics, were used here: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Sparsenet, and Non-negative Matrix Factorization (NMF). Each algorithm decomposes natural scenes into basis features or functions, from which the input can be reconstructed as a linear combination. The decomposition algorithms can therefore be seen as filtering the input data, with the basis functions filtering statistical dependencies. Because each of the four algorithms applies a different statistical criterion when constructing its basis functions, each filters different types of statistics in natural images.

Usually, the performance of such decomposition algorithms is evaluated with analytically defined measures, e.g. the reconstruction error between the original and the reconstructed image, or the redundancy reduction achieved by the decomposition. Here, in addition to computing reconstruction errors, we tested the algorithms in three psychophysical experiments. In each experiment, subjects had to match reconstructed images to their originals in a delayed match-to-sample paradigm, so their performance depends on how well the algorithms preserve the information present in the original scenes. The three experiments probed different properties of the algorithms. The first experiment examined how psychophysical performance depends on the number of basis functions used in the reconstruction. Since each algorithm derives its basis functions by applying a statistical criterion to a set of training images, an important question is how well these basis functions generalize; this was tested in the second experiment. Finally, the third experiment assessed the robustness of the algorithms against noise added during the reconstruction process.

Evaluating decomposition algorithms in this more natural context provides new insights that extend the computational results, and it helps to clarify which statistical criteria are important in natural vision as opposed to purely computational settings.
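To make the decomposition-and-reconstruction pipeline concrete, here is a minimal sketch, not the thesis's actual code, using scikit-learn's PCA as a stand-in for the four decompositions (FastICA and NMF have scikit-learn counterparts, though NMF requires non-negative data; Sparsenet would need a custom implementation). The patch size, component counts, and random "patches" are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Stand-in for natural image patches (16x16 pixels, flattened).
patches = rng.standard_normal((1000, 16 * 16))

for n_basis in (8, 32, 128):
    pca = PCA(n_components=n_basis)
    coeffs = pca.fit_transform(patches)    # project each patch onto the basis functions
    recon = pca.inverse_transform(coeffs)  # reconstruct as a linear combination of bases
    mse = np.mean((patches - recon) ** 2)  # analytically defined reconstruction error
    print(f"{n_basis:3d} basis functions -> reconstruction MSE {mse:.4f}")
```

Varying `n_basis` as in the loop above corresponds to the manipulation of the first experiment: fewer basis functions discard more information, which should degrade both the analytic error and the psychophysical matching performance.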
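The noise manipulation of the third experiment can be sketched analogously by perturbing the coefficients before the linear reconstruction. The noise level `sigma` below is an assumed value for illustration, not one taken from the experiments.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
patches = rng.standard_normal((1000, 16 * 16))  # stand-in for natural image patches

pca = PCA(n_components=32).fit(patches)
coeffs = pca.transform(patches)

# Inject Gaussian noise into the coefficients before the linear reconstruction;
# sigma is an assumed noise level, not a value from the experiments.
sigma = 0.5
noisy_coeffs = coeffs + sigma * rng.standard_normal(coeffs.shape)
recon = pca.inverse_transform(noisy_coeffs)
print(f"reconstruction MSE with coefficient noise: {np.mean((patches - recon) ** 2):.4f}")
```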