Hidden:
Keywords:
-
Abstract:
Humans are generally pretty good at visually estimating the 3-D shape of objects. However, under some circumstances we are subject to illusions. For example, in 'shape from shading', certain illumination conditions can systematically alter perceived 3-D shape. Similarly, in 'shape from texture', certain textures can induce systematic misperceptions of shape. Most computational theories of 'shape-from-x' focus on achieving accurate shape reconstruction. However, a good model of human vision should account for the pattern of errors as well as successes. Here, in a series of gauge-figure and similarity rating tasks, we measure how perceived shape changes across variations in illumination, surface reflectance, texture, and certain shape transformations. We then show how a number of simple image statistics derived from filters tuned to different orientations and scales qualitatively predict the pattern of both successes and errors. Importantly, this shows how both similarities and differences between cues (such as shading, highlights, and texture) might be explained by a common front-end.
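As a rough illustration of the kind of front-end statistics the abstract describes, the sketch below computes mean 'energy' responses from a small bank of oriented, multiscale filters. This is a generic Gabor-energy sketch, not the authors' actual model: the filter sizes, wavelengths, and number of orientations are all placeholder choices.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Odd-symmetric Gabor kernel at one orientation and scale (illustrative)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates so the carrier runs along orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.sin(2 * np.pi * xr / wavelength)
    return envelope * carrier

def filter_energy_stats(image, wavelengths=(4, 8, 16), n_orient=6):
    """Mean squared filter response per (scale, orientation) band."""
    stats = np.zeros((len(wavelengths), n_orient))
    F = np.fft.fft2(image)
    for i, lam in enumerate(wavelengths):
        for j in range(n_orient):
            theta = j * np.pi / n_orient
            k = gabor_kernel(31, lam, theta, sigma=lam / 2)
            K = np.fft.fft2(k, s=image.shape)  # zero-padded kernel spectrum
            resp = np.real(np.fft.ifft2(F * K))  # convolution via FFT
            stats[i, j] = np.mean(resp**2)
    return stats

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
stats = filter_energy_stats(img)
print(stats.shape)  # one energy value per scale x orientation
```

The resulting (scales × orientations) table of energies is the sort of compact summary that could be compared across images rendered under different illuminations, reflectances, or textures.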