Abstract
Traditional models of selection using saliency maps assume that visual inputs are processed by separate feature maps whose outputs are subsequently summed to form a master saliency map. A recent hypothesis (Li, TICS 6:9-16, 2002) that V1 implements a saliency map requires no separate feature maps. Rather, saliency at a visual location corresponds to the activity of the most active V1 cell responding to inputs there, regardless of its feature tuning. We tested the two models using texture segmentation and visual search tasks. Texture borders in patterns A and B pop out due to the higher saliency of the bars at the borders. Traditional models predict easier texture segmentation in pattern C (created by superposing A and B) than in A and B alone, while the V1 model does not. Traditional models predict no interference from the component pattern D in segmenting pattern E, which is created by superposing A and D, while the V1 model predicts interference. Using reaction time as a measure of task difficulty, the V1 model's predictions were confirmed. Analogous results were found in search tasks for orientation singletons in stimuli whose target and distractors were made of single or composite bars. The V1 model was also confirmed using stimuli made of color-orientation feature composites.
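The contrast between the two models can be illustrated with a minimal numerical sketch. The arrays below are hypothetical feature-map responses, not data from the paper; they merely show how a summation rule and a maximum rule can rank the same locations differently, which is the source of the diverging predictions about superposed patterns.

```python
import numpy as np

# Hypothetical responses of two feature-tuned populations (e.g., two
# orientation channels) at two visual locations. Values are illustrative.
feature_map_1 = np.array([0.5, 0.9])
feature_map_2 = np.array([0.6, 0.1])

# Traditional model: separate feature maps are summed into a master map.
master_sum = feature_map_1 + feature_map_2   # [1.1, 1.0]

# V1 hypothesis: saliency at a location is the activity of the MOST
# ACTIVE cell responding there, regardless of its feature tuning.
master_max = np.maximum(feature_map_1, feature_map_2)  # [0.6, 0.9]

# The two rules select different locations as most salient:
print(np.argmax(master_sum))  # location 0 wins under summation
print(np.argmax(master_max))  # location 1 wins under the max rule
```

Superposing a second pattern adds responses and can therefore raise (or mask) saliency under the summation rule, whereas under the max rule a superposed component changes saliency only where its responses exceed the existing maximum.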