Abstract
Visual search is the task of finding a target among distractors. When the target has a feature that is absent in the distractors, and that feature lies in a basic feature dimension such as color, orientation, depth, or motion direction, the search can be very efficient and is termed a feature search (Treisman and Gelade, Cog. Psychol. 1980). When the target is distinguishable only by a particular conjunction of features, e.g., green and vertical, each of which is present in the distractors, the search is termed a conjunction search. Some conjunction searches, e.g., conjunctions of depth and orientation (Nakayama and Silverman, Nature 1986) or of motion and orientation (McLeod et al., Nature 1988), can be efficient, while others, such as conjunctions of color and orientation, may be difficult depending on the stimuli (Treisman and Gelade 1980; Wolfe, Vis. Res. 1992). Double feature searches are those in which the target differs from the distractors in more than one feature dimension, e.g., a green-vertical target bar among red-horizontal distractor bars. They should be no less efficient than the two corresponding single feature searches (e.g., a green target bar among red distractor bars, or a vertical target bar among horizontal distractor bars). This double feature advantage is stronger for some feature pairs, such as motion-orientation, than for others, such as color-orientation (Nothdurft, Vis. Res. 2000). I use a V1 model to show how the varying efficiencies of these search tasks can be understood from a saliency map constructed by V1 (Li, TICS 2002). Contextual influences make V1 responses increase with the saliencies of the stimuli, and these saliencies determine search efficiency. The model shows that a conjunction or double feature search is more efficient if cells tuned to the conjunction of features are present in V1 and if the intra-cortical connections preferentially link cells tuned to similar feature values in both feature dimensions. The model links psychophysics with physiology and provides testable predictions.
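
The following is a minimal, hypothetical sketch (in Python) of the saliency-map logic summarized above, not the actual V1 model of Li (TICS 2002). The function item_saliency, the suppression constant, and the example display are assumptions made purely for illustration: each item's saliency is taken as the maximum response over feature-tuned cells, and iso-feature suppression stands in for intra-cortical contextual influences. A conjunction-tuned cell, which is suppressed only by neighbors sharing both of its preferred feature values, lets the conjunction target stand out.

def item_saliency(items, index, use_conjunction_cells=False, suppression=0.1):
    # items: list of (color, orientation) tuples, e.g. ('green', 'vertical')
    # index: which item in the display to evaluate
    # Returns the maximum response over this item's feature-tuned cells,
    # each suppressed in proportion to how many other items share the
    # cell's preferred feature (or feature conjunction).
    color, orient = items[index]
    others = [it for i, it in enumerate(items) if i != index]

    responses = []
    # Single-feature cells: one tuned to the item's color, one to its orientation.
    n_same_color = sum(1 for c, _ in others if c == color)
    n_same_orient = sum(1 for _, o in others if o == orient)
    responses.append(1.0 - suppression * n_same_color)
    responses.append(1.0 - suppression * n_same_orient)

    if use_conjunction_cells:
        # Conjunction cell: suppressed only by items sharing BOTH features,
        # i.e. its lateral connections link cells tuned to similar feature
        # values in both dimensions.
        n_same_both = sum(1 for c, o in others if c == color and o == orient)
        responses.append(1.0 - suppression * n_same_both)

    return max(0.0, max(responses))

# Example display: conjunction search, a green-vertical target among
# green-horizontal and red-vertical distractors.
display = ([('green', 'vertical')]
           + [('green', 'horizontal')] * 5
           + [('red', 'vertical')] * 5)

for use_conj in (False, True):
    target_sal = item_saliency(display, 0, use_conjunction_cells=use_conj)
    distractor_sal = max(item_saliency(display, i, use_conjunction_cells=use_conj)
                         for i in range(1, len(display)))
    print('conjunction cells:', use_conj,
          '| target saliency', round(target_sal, 2),
          '| max distractor saliency', round(distractor_sal, 2))

With the toy parameters above, the target is less salient than the distractors when only single-feature cells are present, and clearly more salient when a conjunction-tuned cell is added, mirroring the claim that conjunction search becomes efficient when conjunction cells and appropriately selective lateral connections exist in V1.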