Abstract:
A study of colour perception shows that, when assigning colour to objects, the seeing brain takes into account subtle reflections of light between the surfaces in a scene.
For more than two centuries, scientists and artists have come up with a range of ways to demonstrate that the wavelength composition of the light over a whole scene can affect how we perceive the colour of its individual parts. The proportion of light of each wavelength reflected from an object is an invariant property of its surface, and can be highly useful for detecting or recognizing objects [1]. However, to make use of that invariant, the visual system somehow has to discount the illuminating light, which can vary quite drastically.
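To make the need for discounting concrete, one standard simplification (an assumption widely used in this literature, not a formula from this article) writes the light reaching the eye as a wavelength-by-wavelength product of illuminant and surface reflectance:

```latex
% Simplified image-formation model (assumed for illustration; not the article's notation):
%   C(\lambda): light reaching the eye,  E(\lambda): illuminant,  S(\lambda): surface reflectance
C(\lambda) = E(\lambda)\, S(\lambda)
% The same surface S viewed under daylight and under tungsten light yields different signals C,
% so recovering the invariant S requires an estimate of E, i.e. discounting the illuminant.
```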
This process is usually called 'colour constancy', and the degree to which it is achieved depends on many factors. Some of these operate at the early stages of sensory processing [2], such as local colour contrast, whereas others (for instance, colour memory) operate at higher cognitive levels [3]. Most computational schemes for achieving colour constancy try to decompose the overall light reaching the eye into one component due to the illuminant and a second component due to the surface reflectance; a simple sketch of such a decomposition follows.
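The article does not single out any particular scheme. As a minimal sketch of the illuminant/reflectance decomposition, the classic grey-world heuristic estimates the illuminant as the mean colour of the scene and divides it out (assumptions: a linear RGB image held in a NumPy array, and a scene that is roughly grey on average):

```python
import numpy as np

def grey_world_constancy(image: np.ndarray) -> np.ndarray:
    """Crude colour-constancy sketch: estimate the illuminant as the mean
    colour of the scene (grey-world assumption) and divide it out, leaving
    an approximation of surface reflectance."""
    # image: H x W x 3 array of linear RGB intensities (illuminant * reflectance)
    illuminant_estimate = image.reshape(-1, 3).mean(axis=0)   # one value per channel
    reflectance_estimate = image / illuminant_estimate        # discount the illuminant
    # Rescale so the result stays in a displayable 0..1 range
    return np.clip(reflectance_estimate / reflectance_estimate.max(), 0.0, 1.0)

# Example: the same surfaces seen under two different (made-up) illuminants
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.1, 0.9, size=(4, 4, 3))           # toy ground-truth reflectances
daylight = np.array([1.0, 1.0, 1.0])
tungsten = np.array([1.0, 0.8, 0.5])
print(np.allclose(grey_world_constancy(reflectance * daylight),
                  grey_world_constancy(reflectance * tungsten)))  # True: illuminant discounted
```

This is only the simplest possible stand-in for the "many factors" mentioned above; it captures the decomposition idea, not local contrast, memory colour or any other mechanism discussed in the article.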
However, the physics of light is more complicated than a simple reflection of light from a surface into the eye, as if we were looking at a photograph. In a three-dimensional world, some of the light reflected from one surface strikes another surface, and only from there is it reflected into the eye (Fig. 1), and so on. These indirect reflections are called 'inter-reflections', and are of particular interest in computer graphics and computer vision [4]. For example, computer simulations of indoor scenes appear more realistic when inter-reflections are taken into account. Indeed, many recent advances in computer graphics stem from efficient algorithms for calculating the effects of all such multiple bounces of light among the vast number of surfaces typically contained in a scene.
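The article names no specific algorithm, but a rough sketch of how such multi-bounce calculations can be organised is the classical radiosity formulation, which iterates B = E + diag(rho) F B so that each pass adds one further bounce of inter-reflection (the patch count, reflectances and form factors below are made-up toy values):

```python
import numpy as np

def radiosity(emission, reflectance, form_factors, bounces=50):
    """Toy radiosity solver: repeatedly add the light each patch receives from
    every other patch, i.e. B = E + diag(rho) @ F @ B, one bounce per iteration."""
    radiosity_b = emission.copy()
    for _ in range(bounces):
        # Light arriving at each patch from all the others, scaled by its reflectance
        radiosity_b = emission + reflectance * (form_factors @ radiosity_b)
    return radiosity_b

# Three-patch toy scene: one emitter (a light source) and two reflecting walls.
emission    = np.array([1.0, 0.0, 0.0])    # only patch 0 emits light
reflectance = np.array([0.0, 0.8, 0.5])    # how strongly each patch re-reflects light
form_factors = np.array([[0.0, 0.3, 0.3],  # F[i, j]: fraction of the light leaving patch j
                         [0.3, 0.0, 0.2],  # that reaches patch i (toy values; equal areas)
                         [0.3, 0.2, 0.0]])

print(radiosity(emission, reflectance, form_factors))
# Patches 1 and 2 end up brighter than direct illumination alone would make them,
# because each also receives light bounced off the other: the inter-reflections.
```

Production renderers use far more efficient solvers over many thousands of patches, but the fixed-point iteration above is the basic "multiple bounces" idea the paragraph refers to.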