Abstract:
It is well known that when a smooth matte surface is illuminated by a point light source such as the sun, the visual system can infer surface shape only up to a twofold ambiguity. This ambiguity corresponds to a depth reversal. When the light source is diffuse, however, this ambiguity does not exist. The reason is that each convexity on the surface receives more illumination than its neighbouring concavity, and thus the convexity is brighter. Recently, it was shown that the visual system can use shading under diffuse lighting to infer qualitative shape (Langer and Bülthoff, 1997 ARVO) and that, for the class of surfaces tested, observers performed better under diffuse lighting than under point-source lighting. The question arises, however, whether observers in that study were merely using a simple default model that 'dark means deep' under the diffuse condition, or whether they used a model more closely related to the actual physics of shading. Here we present results from a control experiment in which we asked observers to judge the relative depth of two nearby points. Performance at this task was better than a 'dark means deep' model would have predicted. This implies that the visual system uses a more accurate shape-from-shading model under diffuse lighting than was previously thought. A computational model of shape-from-shading under diffuse lighting offers some insights into how a visual system might achieve the improved performance under diffuse lighting.
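The 'dark means deep' default model discussed above can be sketched in a few lines: under this heuristic, relative depth is read off directly from image intensity, with darker points judged deeper. The sketch below is illustrative only and is not the authors' computational model; the function names, the [0, 1] intensity scale, and the linear mapping are assumptions made for clarity.

```python
# Illustrative sketch (not the paper's model): the 'dark means deep'
# heuristic assigns relative depth directly from image intensity.
# Intensities are assumed to lie in [0, 1]; darker means deeper.

def dark_means_deep(intensities):
    """Map each intensity to a relative depth: depth = 1 - intensity,
    so darker image points are judged to lie deeper."""
    return [1.0 - i for i in intensities]

def relative_depth_judgement(i_a, i_b):
    """Judge which of two nearby points is deeper under the heuristic.
    Returns 'a', 'b', or 'equal'."""
    d_a, d_b = 1.0 - i_a, 1.0 - i_b
    if d_a > d_b:
        return 'a'
    if d_b > d_a:
        return 'b'
    return 'equal'

# A dim point (e.g. a concavity at intensity 0.3) is judged deeper
# than a bright point (e.g. a convexity at intensity 0.8):
print(relative_depth_judgement(0.3, 0.8))  # prints 'a'
```

The control experiment described in the abstract probes exactly this pairwise judgement: if observers relied solely on such a heuristic, their relative-depth responses would track local intensity alone, which the reported results contradict.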