Visual search

From Scholarpedia
Jeremy Wolfe and Todd S. Horowitz (2008), Scholarpedia, 3(7):3325. doi:10.4249/scholarpedia.3325

Visual search is the common task of looking for something in a cluttered visual environment. The item that the observer is searching for is termed the target, while non-target items are termed distractors.

Why search?

Many visual scenes contain more information than we can fully process all at once (Tsotsos, 1990). Accordingly, mechanisms like those subserving object recognition process only a restricted part of the visual scene at any one time. Visual attention controls the selection of that subset of the scene. The subset may be an array of locations, but more likely it is an object or a small group of objects (Goldsmith, 1998). Most visual searches consist of a series of attentional deployments, which ends either when the target is found or the search is abandoned (see Search termination below).

Covert search, overt search, foraging

Overt search refers to a series of eye movements around the scene made to bring difficult-to-resolve items onto the fovea. If the relevant items in the visual scene are large enough to be identified without fixation, search can be successfully performed while the eyes are focused on a single point. Attentional shifts made during a single fixation are termed covert, because they are inferred rather than directly observed. Under laboratory conditions, many search tasks can be performed entirely with covert attention. Under real world conditions, a new point of fixation is selected 3 or 4 times per second. Overt movements of the eye and covert deployments of attention are closely related (Kowler, Anderson, Dosher, & Blaser, 1995). However, with stimuli that do not require direct foveation, 4-8 objects can be searched during each fixation. This means that either such objects are processed in parallel, or we can make several covert attentional shifts per fixation.

Measuring search performance

How well can we perform a specific search task? In a standard laboratory search task, observers are asked to search for a target in an image on a computer monitor. Such an artificial scene might subtend a region of the visual field measuring 20 degrees of visual angle (dva) by 20 dva. Observers are asked to perform several hundred trials of the search task. The number of items in the scene (set size), and thus the number of distractors, is varied from trial to trial. Typically, the target is present on half of the trials and absent on the others. The time to make a response (reaction time, or RT) is measured, as well as the accuracy of the answer. RT increases in a roughly linear manner with set size. The slope of the RT x set size function is a standard measure of search efficiency, since it gives an estimate of search throughput in terms of items per unit time. Theoretical assumptions are needed in order to translate from slope to an actual estimate of the number of items that have been attended and processed. Without committing to a specific theoretical stance, we can say that searches with slopes near zero are efficient. For stimuli that are large enough not to require eye movements, an inefficient search is one with target-present slopes in the 20-40 msec/item range. Target-absent trials tend to have slopes that are a bit more than twice as steep as the target-present slopes (Wolfe, 1998) (see The mechanics of search below). Much steeper slopes can be obtained if each item requires fixation prior to identification, or if each item is intrinsically difficult to classify as target or distractor.

A linear function, fitted to the RT x set size data, will have an intercept as well as a slope. That intercept will be several hundred msec even for simple searches. It reflects the components of the task that do not involve sequential deployments of attention. These components include visual processes prior to attentional selection, as well as decision and motor components that come after the search, per se, has been completed.
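
To make the slope and intercept logic concrete, here is a minimal sketch of fitting a linear function to RT x set size data. The RT values are invented for illustration; a real experiment would average hundreds of trials per set size.

```python
# Fit RT = slope * set_size + intercept to (hypothetical) mean RTs.
import numpy as np

set_sizes = np.array([4, 8, 12, 16])              # items per display
mean_rt_present = np.array([610, 720, 840, 950])  # invented mean RTs (msec)

slope, intercept = np.polyfit(set_sizes, mean_rt_present, 1)
print(f"slope: {slope:.1f} msec/item")     # ~28 msec/item: an inefficient search
print(f"intercept: {intercept:.0f} msec")  # several hundred msec of non-search time
```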

This discussion of RTs assumes a situation in which errors are relatively rare, as is the case for simple searches where stimuli remain visible until observers respond. In other experimental regimes, more information is obtained from the error rates than from the RTs (Palmer et al., 2000). For example, if stimuli are briefly presented, it is error rate that will increase with set size, rather than RT (e.g., Dukewich & Klein, 2005). In general, speed and accuracy will trade off, and this must be taken into account in interpreting search results. Sometimes this tradeoff is exploited as a method in its own right (e.g., Carrasco, Giordano, & McElree, 2006).

Factors that modulate search performance

You can search for anything. However, some searches will be more efficient than others. In this section, we describe a number of the factors that determine search efficiency in laboratory experiments. In a later section (Search in complex stimuli), we will consider whether these factors apply to real-world search tasks.

Guiding attributes

For a search to be possible at all, the target must be different from the distractors in some detectable fashion. Finding a needle in a haystack will be a laborious search, but it will be possible. Finding the one specific needle in a needle stack will not be possible. Stimuli can differ from each other in a host of ways, but there is a limited set of attributes that will allow a target to be found efficiently among distractors that differ in that attribute. We call these guiding attributes because they can be used to guide attention.

Earlier work would refer to these as preattentive features (Treisman & Gelade, 1980). The term preattentive is used in several ways, some of them problematic. To say that an attribute like color is preattentive seems to imply that all processing of color is done before or without attention. That is unlikely. The original use of preattentive had a spatial/neural aspect to it, implying that some brain loci were preattentive. More modern understandings suggest that an area like primary visual cortex might initially process a visual stimulus without showing an influence of attention. However, activity in the very same piece of cortex might be subsequently modulated by attention in a reentrant manner (Di Lollo, Enns, & Rensink, 2000; Lamme & Roelfsema, 2000; Saalmann, Pigarev, & Vidyasagar, 2007). The most helpful use of the term preattentive is a temporal usage. Prior to deployment of attention to an object, any visual processing of that object is, by definition, preattentive. In any case, we will use the more neutral term guiding attribute to refer to visual properties that can be used to direct deployment of attention. In this jargon, a feature (like red) is a specific instance of an attribute (like color).

Below, we reproduce a list of attributes modified from Wolfe and Horowitz (2004). These are grouped by the likelihood that they will support efficient search. Where references are not listed, they can be found in the original article. A reasonable estimate would be that there are between ten and twenty-five basic attributes that guide the deployment of attention.


Table 1: Classification of Guiding Attributes (Derived but modified from Wolfe & Horowitz (2004)).

Undoubted Attributes: undoubted, meaning that there are a large number of studies with converging methods.
  1. Color
  2. Motion
  3. Orientation
  4. Size (incl. length & spatial freq.)
Probable Attributes: less confidence, due to limited data, dissenting opinions, or the possibility of alternative explanations.
  1. Luminance onset (flicker)
  2. Luminance polarity (see also Contrast (Pashler, Dobkins, & Huang, 2004))
  3. Vernier Offset
  4. Stereoscopic depth & tilt
  5. Shape – Shape is an ill-defined attribute. There are a host of other attributes that might be considered to be aspects of shape:
    1. Line termination
    2. Closure
    3. Topological status (e.g. has a hole) (L. Chen, 2005)
    4. Curvature
    5. Aspect ratio
  6. Inter-item symmetry (Roggeveen, Kingstone, & Enns, 2004; van Zoest, Giesbrecht, Enns, & Kingstone, 2006)
Possible Attributes: still less confidence.
  1. Binocular luster
  2. Expansion
  3. Number
  4. Shininess (Birnkrant, Wolfe, Kunar, & Sng, 2004)
  5. Faces (familiar, upright, angry, gaze direction, etc.) – No candidate for inclusion on the list of guiding attributes is more controversial than the human face. The literature is filled with claims for special attentional status for faces and with counter-claims that these results are artifacts or by-products of some other basic attribute. Possible seems like a judicious placement at present (for some recent work, see: Doi & Ueda, 2007; Langton, Law, Burton, & Schweinberger, 2007; Hahn, Carlson, Singer, & Gronlund, 2006; Hershler & Hochstein, 2005; Hershler & Hochstein, 2006; VanRullen, 2006).
Doubtful cases: unconvincing, but still possible.
  1. Letter Identity (over-learned sets, in general)
  2. Alphanumeric Category – One suspects that most of the evidence for letter identity and alphanumeric category reflects our inability to adequately define shape.
Second-order Attributes: a category of visual properties that seem to support efficient search by creating other attributes. For example, orientation is an uncontroversial feature. Orientation in the third dimension (slant) appears to support efficient search. There are many cues to depth that will produce a target of one orientation and distractors of another in the inferred third dimension. We could declare all of these to be basic attention-guiding attributes, but it might be better to consider them to be properties that are analyzed in early visual stages, without the need for attention, but without the ability, in isolation, to guide attention. In some cases (e.g. lighting direction), this has been shown experimentally (Ostrovsky, Cavanagh, & Sinha, 2004).
  1. Pictorial depth cues (e.g. linear perspective, apparent size, occlusion)
  2. Shadow (Elder, Trithart, Pintilie, & MacLean, 2004; Rensink & Cavanagh, 2004)
  3. Lighting direction (shading)
Probably non-attributes: suggested guiding features where the balance of evidence argues against inclusion on the list.
  1. Intersection
  2. Optic flow
  3. Luminosity (light sources) (Correani, Scot-Samuel, & Leonards, 2006)
  4. Duration (Morgan, 2006)
  5. Color change
  6. 3-D volumes (e.g. geons)
  7. Your name
  8. Semantic category (e.g. animal, scary)

Bottom-up salience, top-down guidance, and attentional capture

Figure 1: Find a green vertical item.

When free-viewing a scene, some items or locations will tend to attract attention because of visual salience. Used in this sense, salience is a bottom-up, stimulus-driven phenomenon. An item that differs dramatically from its neighbors in one or more of the attributes in Table 1 will tend to be salient. Bottom-up salience can be modified by the top-down goals of the searcher. Thus, a search for a green vertical item will cause attention to be guided to all green and all vertical items (Figure 1). Moreover, observers report that this top-down command renders green and, perhaps, vertical items more perceptually salient (Blaser, Sperling, & Lu, 1999). Similar effects can be seen at the single cell level (e.g. Bichot & Schall, 1999).
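
As an illustration of how top-down commands might guide attention toward feature-matching items, here is a toy sketch in the spirit of guidance models, applied to the green-vertical search of Figure 1. The items, weights, and scoring rule are our invented assumptions, not a published model.

```python
# Each item receives an activation equal to the weighted sum of its matches to
# the target's features; attention visits items in descending activation order.
items = [
    {"color": "green", "orientation": "vertical"},    # the conjunction target
    {"color": "green", "orientation": "horizontal"},  # shares color only
    {"color": "red",   "orientation": "vertical"},    # shares orientation only
    {"color": "red",   "orientation": "horizontal"},  # shares nothing
]

def activation(item, target, w_color=1.0, w_orient=1.0):
    return (w_color * (item["color"] == target["color"])
            + w_orient * (item["orientation"] == target["orientation"]))

target = {"color": "green", "orientation": "vertical"}
order = sorted(items, key=lambda i: activation(i, target), reverse=True)
print(order[0])  # the green vertical item has the highest activation
```

Because the conjunction target matches the target template on both features, it outscores every distractor, so guided attention reaches it first even though no single feature defines it uniquely.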

What happens if an item of considerable bottom-up salience is not the desired target of the current visual search? If bottom-up and top-down factors are in conflict, will top-down goals prevent deployment of attention to an otherwise salient item or will that item capture attention? The voluminous literature on this topic shows that under strong top-down control, quite salient stimuli seem to be ignored (e.g. Williams, 1985) but that some stimuli, notably onsets and/or new objects, are very hard to ignore (e.g. Remington, Johnston, & Yantis, 1992; Theeuwes, 1994). As Sully said in the 19th century:

One would like to know the fortunate (or unfortunate) man who could receive a box on the ear and not attend to it.
(p.146 of Sully, 1892).

Similarity relations


In general, efficiency of search decreases as the similarity between target and distractors increases, and increases as the similarity among distractors increases (Duncan & Humphreys, 1989). The most efficient searches are searches for a distinctive target amongst homogeneous distractors.

Categorical processing

Figure 3: Example of categorical processing.

Guiding features are subject to rules that govern their ability to guide. These differ from the rules that govern perception of these features (Wolfe & Horowitz, 2004). For instance, effective guidance requires differences between target and distractors that are much greater than one just noticeable difference (Nagy & Sanchez, 1990). For some attributes, perhaps for most, similarity is defined in categorical terms. That is, a target that is categorically different from the distractors will be easier to find than one that is equally distant in feature space but within the same category (Daoutis, Pilling, & Davies, 2006; Wolfe, Friedman-Hill, Stewart, & O'Connell, 1992).

Spatial layout, density, crowding

The efficiency of a search will be influenced by the distribution of items across the visual field. We can identify two countervailing tendencies. As density increases, many searches, particularly simple feature searches, tend to get easier (Nothdurft, 2000; Sagi, 1990). Having a target item close to a distractor item makes it easier to notice that they are different. On the other hand, when items are close to each other, they crowd each other, making it hard to identify individual items. For instance, a letter that is just big enough to read at, say, 5 deg eccentricity, may be impossible to read if flanked by other letters (He, Cavanagh, & Intriligator, 1996; Levi, Klein, & Aitsebaomo, 1985). If crowding makes it harder to identify items, it will slow search for those items (Vlaskamp & Hooge, 2006). The specific densities producing crowding or the advantages of proximity will vary with the specific nature of the search stimuli.

Target eccentricity (distance from the fixation point) will also modulate search performance. All else being equal, targets, even large, easily identified targets, will be found more slowly as their eccentricity increases (Carrasco, Evert, Chang, & Katz, 1995).

Item and display size and the need for eye movements

As noted earlier, eye movements occur at rates far slower than inefficient covert search. As a result, search will become markedly less efficient once items become small enough to require foveation before they can be recognized. If the search display is large enough, it will be necessary to move the eyes or the head, with similar effects on search efficiency.

Search history (priming, contextual cueing effects, and repeated search)

Your ability to find a target in the current search is affected by what you have been searching for previously. In general, you are faster searching for a given target if you found that same target on a recent trial (Hillstrom, 2000). This memory for the target identity seems to go back about seven trials (Maljkovic & Nakayama, 1994).

This priming effect might occur by facilitating guidance (e.g. If I found a red vertical target, I can more effectively guide attention toward subsequent red and vertical items.) (Kristjansson, 2006). Alternatively, it might be due to memory or response facilitation (Huang, Holcombe, & Pashler, 2004). These need not be mutually exclusive effects.

The layout of previous displays can also modulate RT. In a sequence of otherwise random displays, repeated association of one spatial configuration of search items with one target location will speed search. This is known as contextual cueing (Chun & Jiang, 1998). This is a robust and long-lasting example of perceptual learning. Again, it is not entirely clear if it is due to improved guidance (e.g. If I see this display, the target must be in this location.) or some sort of response facilitation (Kunar, Flusberg, Horowitz, & Wolfe, 2007). This may be a laboratory demonstration of more general contextual guidance effects observed with natural scenes.

Interestingly, repeated search through a small set of unchanging items does not become more efficient with repetition, even over hundreds of trials (Wolfe, Klempen, & Dahlen, 2000). RT is speeded, but the slope of the RT x set size function remains constant. Apparently, using memory for an item’s location is less efficient than repeating the visual search (Kunar, Flusberg, & Wolfe, 2007). This is true if all locations are potential target locations. If targets appear in only a few locations, learning those locations will improve the efficiency of search. Thus, in a real world search for your cat, search efficiency will be improved by learning that the cat sits in only five of the possible locations in this room (Kunar, Flusberg, & Wolfe, 2007).

The mechanics of search

Rate of search

Slopes of RT x set size functions for target-present trials of simple, inefficient searches (e.g. for a letter among other letters) are on the order of 20-40 msec/item. If items were being processed one after the other in series, and if items were never revisited during a search (see next section), this would seem to imply that each item takes 40-80 msec to process, meaning that 12-25 items would be processed each second. (Why the doubling? In the serial, self-terminating search described here, observers will need to search through an average of \((N+1)/2 \) items in order to find the target, so the target-present slope is only about half the per-item processing time.)
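
The serial, self-terminating arithmetic can be checked with a small Monte Carlo sketch. The 50 msec per-item time is an arbitrary assumption for illustration.

```python
# If each item takes T_ITEM msec and items are visited in random order,
# target-present RTs average T_ITEM * (n + 1) / 2, so present slopes approach
# T_ITEM / 2; absent trials require all n items, giving slopes near T_ITEM.
import random

T_ITEM = 50  # assumed msec per item

def present_rt(n):
    # the target occupies a random position in the inspection order
    return T_ITEM * random.randint(1, n)

for n in (4, 8, 16):
    mean_present = sum(present_rt(n) for _ in range(20000)) / 20000
    print(n, round(mean_present), "vs predicted", T_ITEM * (n + 1) / 2)
```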

A problem arises here because estimates of the minimum time required to recognize a single object are almost always greater than 100 msec. One solution is to assume that multiple items (perhaps all items) are processed in parallel (Palmer, 1995). A somewhat different approach notes that the slope of the RT x set size function is a measure of throughput but not necessarily a measure of the time required to process each item. A carwash can serve as a metaphor for this sort of pipeline process. It might take three minutes to wash each car, but the next car does not have to wait for the first car to be completely washed before entering. The carwash’s throughput might be one car every 30 seconds. The key insight is that while only one car can enter at a time, multiple cars can be in the carwash simultaneously (Wolfe, 2003).
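
The carwash arithmetic can be written out directly. The latency and entry-interval values below are illustrative assumptions, not measured quantities.

```python
# In a pipeline, per-item processing time (latency) and throughput differ.
LATENCY = 300        # assumed msec to fully process one item (>100 msec)
ENTRY_INTERVAL = 40  # assumed msec between items entering the pipeline

def finish_time(k):
    """Time at which the k-th item (1-indexed) is fully processed."""
    return (k - 1) * ENTRY_INTERVAL + LATENCY

# Each item finishes 40 msec after the previous one, so measured throughput is
# 1000/40 = 25 items/sec even though each item takes 300 msec to process.
print([finish_time(k) for k in (1, 2, 3, 4)])  # [300, 340, 380, 420]
```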

Memory, sampling strategies, and voluntary deployments of attention

Many models of search that propose deployment of attention to one item at a time (see Modeling approaches below) have argued that each item is only attended once during a search, a process known as sampling without replacement. The phenomenon of inhibition of return (IOR) was often invoked as a mechanism for this memory. IOR refers to the finding that it is harder to direct attention to a recently attended location or object (Posner & Cohen, 1984; for a review see Klein, 2000). In principle, IOR could prevent deployment of attention to rejected distractors (Klein, 1988). In practice, however, search slopes are, at most, modestly affected when IOR is disrupted (Horowitz & Wolfe, 1998). If IOR has a role in search, it seems more likely that it is something like a foraging facilitator (Klein & MacInnes, 1999), keeping search away from a few recently visited items but not tagging every rejected distractor in a search (but see Hooge, Over, van Wezel, & Frens, 2005). One implication of the failure to tag each item is that the actual sampling rate may be even faster than the 12-25 items/second rate noted above in Rate of search.

One simple way to prevent repeated deployments of attention to rejected distractors would be to adopt a scanning strategy (e.g. reading a display from left to right and top to bottom), which requires only a memory for the scanning plan. Volitional strategies of this sort are undoubtedly part of many complex search tasks (e.g. I have looked in the kitchen. Now I will search the bedroom.). However, evidence suggests that volitional deployments of attention are much slower than automatic deployments (Wolfe, Alvarez, & Horowitz, 2000). Volitional deployments appear to occur at a rate similar to saccadic eye movements; this may not be a coincidence. Eye movements in complex searches do appear to be guided by such strategies (Gilchrist & Harvey, 2006). Real-world searches may well be combinations of relatively slow strategic choices and much faster, but more chaotic, search of the local neighborhood.

Search termination

It is easy enough to decide when to terminate a successful visual search. You can quit when you have found the target. When do you abandon an unsuccessful search? The obvious answer is that you can declare the target to be absent when you have rejected every distractor object. However, as noted in the previous section, we do not have perfect memory for rejected distractors, making it difficult to determine when this point has been reached. Moreover, other properties of the data (e.g. RT distributions) argue against an exhaustive search. Observers appear to set a quitting threshold in an adaptive manner based on whatever information they can glean from preceding trials (e.g. I got the last absent trial correct. Perhaps I could go a little faster on the next trial.). It is difficult to model this behavior for situations in which observers search similar displays for hundreds of trials. It is daunting to contemplate how unsuccessful searches are terminated under real-world conditions.
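
One way to picture such an adaptive quitting rule is the sketch below, in which the threshold rises after a miss and falls after a correct absent trial. The update rule and step sizes are our assumptions for illustration, not a published model.

```python
# Adjust a quitting threshold (msec of accumulated search time) trial by trial.
def update_threshold(threshold, trial_outcome, step_up=50, step_down=10):
    if trial_outcome == "miss":            # quit too early: search longer next time
        return threshold + step_up
    if trial_outcome == "correct_absent":  # quitting time sufficed: speed up a bit
        return max(0, threshold - step_down)
    return threshold                       # hits/false alarms left unchanged here

t = 1000  # assumed initial quitting threshold (msec)
for outcome in ["correct_absent", "correct_absent", "miss", "correct_absent"]:
    t = update_threshold(t, outcome)
    print(outcome, "->", t)
```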

An interesting special case occurs when targets are rare. Low target prevalence is a feature of important search tasks like airport baggage screening and routine medical screening. In the lab, low prevalence puts strong pressure on observers to make target absent responses. In turn, this shift in criterion (using the term in its signal detection theory sense) will increase miss errors and decrease false alarm errors. This is a potential source of trouble in tasks that are put in place to detect important rare events (Wolfe et al., 2007).
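
The criterion-shift account can be made concrete with a standard signal detection calculation. The d' and criterion values are invented for illustration.

```python
# Holding sensitivity (d') fixed, a more conservative criterion raises the
# miss rate while lowering the false alarm rate.
from statistics import NormalDist

d_prime = 2.0
norm = NormalDist()

for criterion in (1.0, 1.5):  # criterion shifts toward "absent" at low prevalence
    miss_rate = norm.cdf(criterion - d_prime)  # P(target trial falls below criterion)
    fa_rate = 1 - norm.cdf(criterion)          # P(blank trial exceeds criterion)
    print(f"c={criterion}: miss={miss_rate:.2f}, false alarm={fa_rate:.2f}")
```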

Neural basis of visual search

Attention in the brain

How does the brain perform search tasks? There is a substantial literature addressing this question using electrophysiological and brain imaging methods in humans and non-human primates (reviewed in Reynolds & Chelazzi, 2004). Effects of attention are widespread in visual cortex and extend down to the earliest stages of primary visual cortex and, in some hands, further down to the lateral geniculate nucleus of the thalamus. These effects on early visual processing stages appear to involve feedback from later stages in processing. Initial processing of a stimulus may not be modulated by attention, but within a couple of hundred msec of stimulus onset, the response to the same stimulus in the same area of visual cortex can show the effects of attention.

At the single cell level, attentional modulation can take many forms. Response can be modulated based on the features or the locations of stimuli. Responses can become larger or more sharply tuned. Attention can improve signal-to-noise ratios. Receptive fields can shift in space. While this seems complex, there is no reason to imagine that attention should have a single neural signature. Attention is used in multiple ways. It has numerous behavioral consequences. It is reasonable that it should have a variety of physiological effects.

Attentional control signals appear to arise from a fronto-parietal network which directs both covert and overt attentional shifts (Corbetta, 1998). Parietal lesions often produce symptoms of hemineglect, a neurological condition where observers can see but have great difficulty directing attention and/or action toward the visual field opposite to the side of the lesion (right parietal lesions produce neglect of the left visual field) (Mort & Kennard, 2003).

The salience map

The neural locus of the salience map is a matter of some controversy. Li (2002) has proposed that primary visual cortex (V1) is the locus of a bottom-up salience map, based on modeling and psychophysical results. Kusunoki, Gottlieb, & Goldberg (2000) claim to have identified a bottom-up salience map in the lateral intraparietal area (LIP), on the basis of single-unit recordings in monkeys. Brain regions that seem to integrate both top-down and bottom-up salience include the frontal eye fields (FEF; Thompson, Bichot, & Sato, 2005) and the middle temporal area (MT; Treue & Martinez-Trujillo, 2006). It seems likely that there is no single salience map in the brain, but rather a network of maps for different tasks, which may compete or cooperate with one another depending on task demands.

Top-down vs. bottom-up networks

The fronto-parietal network is particularly associated with top-down control (Corbetta & Shulman, 2002). Attentional capture, for example, appears to be the consequence of frontal deactivation (Lavie & de Fockert, 2006). Corbetta and Shulman argued that capture represents a "circuit-breaker" on the fronto-parietal network, enabling attention to be directed towards important events that are not part of the organism's current goals. They located this circuit breaker in a more ventral network (Corbetta & Shulman, 2002).

Feature binding

Treisman and Gelade (1980) established the binding problem as one of the fundamental issues in attention and search. While the visual system analyzes stimuli into their component features, we experience holistic objects. Where and how does this synthesis occur? Neuropsychological evidence suggests that the parietal lobe is important for feature binding. Bilateral parietal lesions can produce Bálint's syndrome. Bálint's patients exhibit "simultanagnosia", an inability to perceive more than one object at a time. As with neglect, this is not a visual sensory problem. The patients can see objects at locations throughout the visual field. However, in the presence of multiple objects, attention is fixed on one to the apparent exclusion of awareness of any others (Driver, 1998). Converging evidence from other neuroscientific techniques supports this conclusion (Humphreys, Hodsoll, Olivers, & Yoon, 2006). How binding is achieved remains controversial. The leading contenders are temporal binding, via synchronous oscillations, and place coding (for a review, see Treisman, 1999).

Eye movements and visual search

Much of the work on basic search processes has either ignored eye movements, or controlled them. This does not necessarily undermine the validity of these studies. Measures of eye movements and RTs in search are highly correlated, and enforcing fixation does not change the pattern of results (Klein & Farrell, 1989; Zelinsky & Sheinberg, 1997). However, eye movements play an important role in search of complex scenes, where many important details cannot be resolved in the periphery. Furthermore, since eye movements can be observed directly, unlike shifts of covert attention, they provide a rich dataset to improve our understanding of search.

Note that there are two basic categories of eye movements: saccades and smooth pursuit. Saccades are rapid, ballistic movements that shift gaze from one point to another. Smooth pursuit movements follow the motion of an object. With a few exceptions (e.g. Khurana & Kowler, 1987; Morvan & Wexler, 2005), the literature on eye movements in visual search is concerned with saccades. Analysis of the distribution of saccades and saccadic latencies has contributed a great deal to our understanding of search. Saccades show evidence of both top-down (Chen & Zelinsky, 2006; Pomplun, 2006) and bottom-up (Sobel & Cave, 2002) guidance. Eye movement studies have also been used to demonstrate new forms of search guidance, such as guidance by scene context (Neider & Zelinsky, 2006a; Torralba, Oliva, Castelhano, & Henderson, 2006). Space limitations preclude a detailed review of this literature. Excellent reviews of the role of eye movements in search are available elsewhere (Findlay & Gilchrist, 2005; Henderson & Ferreira, 2004).

Studies of eye movements have also been used to shed light on the question of memory, or sampling strategy, in visual search (see Memory, sampling strategies, and voluntary deployments of attention above). When objects are very small and sparse, requiring foveation, perfect memory can be demonstrated (Peterson, Kramer, Wang, Irwin, & McCarley, 2001). Under other circumstances, IOR may serve to discourage fixations on recently fixated items (Boot, McCarley, Kramer, & Peterson, 2004). The eyes do revisit examined locations (Gilchrist & Harvey, 2000; Gilchrist, North, & Hood, 2001; Hooge, Over, van Wezel, & Frens, 2005), suggesting a small but potentially useful memory for eye movements in search (McCarley, Wang, Kramer, Irwin, & Peterson, 2003). That memory may be supplemented by deliberate scanning strategies (Gilchrist & Harvey, 2006).

Search in complex stimuli

The great bulk of the work on visual search discussed here has used simple stimuli presented on computer monitors. The hope and assumption is that the rules that apply in the lab will also apply in the world.

Search in scenes

One might ask why researchers have resorted to such artificial stimuli when our interest is in how observers find real objects in real scenes. It is worth noting just a few of the daunting methodological issues. If one wants to ask about searches for red vertical lines amongst green vertical and red horizontal distractors, it is straightforward to present hundreds of trials with targets and countable numbers of distractors in random locations. If one wants to ask about searches for coffee makers in kitchens, none of this is simple. Coffee makers cannot be placed randomly in real kitchens. If we ask repeatedly about this one kitchen, we have changed the question. We cannot easily generate arbitrary numbers of real kitchens (though this is easier if we opt for realistic kitchens drawn with architectural software). We do not know how to count the number of objects. Is the stove an object? Is every knob on the stove an object? If not, why not? Of course, much interesting and important work has been done with scene stimuli (Brockmole & Henderson, 2006; Eckstein, Drescher, & Shimozaki, 2006; Henderson & Hollingworth, 1999; Hidalgo-Sotelo, Oliva, & Torralba, 2005; Neider & Zelinsky, 2006b), much of it in the eye movement literature, referenced above (Henderson & Ferreira, 2004). The spatial layout of scenes undoubtedly guides the deployment of attention. We look for coffee makers on surfaces that are likely to hold such objects (Torralba, Oliva, Castelhano, & Henderson, 2006). However, guidance by scene context may be qualitatively different from guidance by attributes like color.

Applied search tasks

Modern civilization has created many specialized search tasks: Examination of bridges for metal fatigue, airport security, air traffic control, and so on. Each has its own specific challenges; for instance, a different balance of the relative costs of miss vs. false alarm errors.

Analysis of medical images (notably x-rays) has been the subject of one of the more extensive literatures in applied visual search (e.g., Berbaum et al., 1998; Eckstein, Pham, Abbey, & Zhang, 2006; Judy, Swensson, & Szulc, 1981; Krupinski, 2005; Kundel, 1991). Space does not permit an extensive review of this topic. One of the challenges in medical image search is that the number of targets is often unknown and it can be important to find every target (e.g. every tumor). Thus, in this literature, there is considerable interest in the phenomenon of satisfaction of search, the situation where otherwise detectable targets are not found because other targets were found first (Berbaum et al., 1990). Observers are satisfied and terminate search. This is a version of the search termination problem described earlier; a version with important consequences.

Modeling approaches

Logan (2004) has provided a recent review of modeling efforts in visual search and, more generally, in attention. Here, we briefly mention a few of the leading efforts.

Feature Integration Theory (FIT)

Much of the work in this field is built on the foundation of Treisman’s seminal Feature Integration Theory (Treisman & Gelade, 1980). In its original form, FIT held that a set of basic features could be processed in parallel, across the visual field in a preattentive stage. Other visual stimuli including conjunctions of basic features could not be identified unless selected by attention in a serial manner. In particular, FIT held that attention was required if two or more features were to be bound into a coherent percept.

Guided Search (GS)

Guided Search is an intellectual heir of FIT. It holds that basic features, derived from the early, parallel stages of processing, can be used to guide the subsequent deployment of attention. In this manner, a conjunction of two features can be found quite efficiently by guiding attention to the intersection of the sets of items possessing each feature (Wolfe, 1994, 2007). The present article describes many search phenomena in terms influenced by GS.

Computational

There are a variety of recent computational models that can be seen as broadly in this FIT/GS theoretical tradition, assuming that early visual processes control subsequent attentional selection. A non-exhaustive list would include the work of Itti and colleagues (Itti & Koch, 2001), Hamker (2004), and Tsotsos (Tsotsos et al., 1995).
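
To give a flavor of such models, here is a stripped-down, center-surround salience computation loosely inspired by the Itti and Koch approach. Real models combine many feature channels and scales; this is only a sketch under simplified assumptions (it requires NumPy and SciPy).

```python
# Salience as a rectified difference-of-Gaussians (center minus surround)
# applied to a single feature map containing one locally distinct item.
import numpy as np
from scipy.ndimage import gaussian_filter

feature_map = np.zeros((64, 64))
feature_map[30:34, 30:34] = 1.0  # one item that differs from its neighborhood

center = gaussian_filter(feature_map, sigma=1)    # fine-scale response
surround = gaussian_filter(feature_map, sigma=8)  # coarse-scale response
salience = np.clip(center - surround, 0, None)    # local contrast, rectified

peak = np.unravel_index(np.argmax(salience), salience.shape)
print(peak)  # near (31, 31): the odd item wins the salience competition
```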

Neuronal

Many models (including many of the aforementioned) are grounded in neurophysiological as well as psychophysical work. Early models had a feed-forward structure in which early visual processes were not influenced by attention. Explicitly neuronal models once tended to describe attention as a filter or gate on the path from input to perception. More recent efforts tend to model attentional effects as feedback from fronto-parietal loci onto the earlier stages of visual processing (Reynolds & Chelazzi, 2004). Desimone and Duncan's Biased Competition model describes the effects of attention at the neuronal level. When multiple stimuli have the potential to influence the response of a given neuron, they compete for control of the output of that neuron. As the theory's name suggests, attention acts to bias that competition in favor of some stimuli over others (Desimone & Duncan, 1995).

Signal detection

Signal detection models have been able to provide quite precise accounts of the rules governing relatively simple searches (e.g. search for a line of one orientation among distractors of another with all lines embedded in visual noise). These models are characteristically parallel in nature, assuming that all items are processed at once. Adding distractors adds noisy signals that might be mistaken for a target, thus degrading performance (Verghese, 2001).
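
A minimal version of such a parallel account is sketched below, using a max rule over noisy item responses; the d' and criterion values are arbitrary assumptions. On target-absent displays, each added distractor is another chance for noise to exceed the criterion, so false alarms grow with set size.

```python
# Parallel signal detection model of search: every item contributes a noisy
# response; the observer reports "present" if the maximum exceeds a criterion.
import numpy as np

rng = np.random.default_rng(0)
D_PRIME, CRITERION, TRIALS = 2.5, 1.8, 20000  # assumed parameters

for n in (4, 8, 16):
    # target-absent displays: n independent noise responses per trial
    noise = rng.normal(0.0, 1.0, size=(TRIALS, n))
    false_alarms = (noise.max(axis=1) > CRITERION).mean()
    print(f"set size {n}: false alarm rate = {false_alarms:.3f}")
```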

This plethora of models is not as inconsistent as it might appear. When a parallel signal detection model allows cues to modulate how much attention is directed to an item, it has moved closer to an FIT- or GS-style account of selection. When an FIT- or GS-style model allows for multiple items to be selected at the same time, it has blurred the distinction between serial and parallel stages. None of these models is inconsistent with the biased competition notion that stimuli compete for access to cells that could process any of them, but not all at the same time. Creation of the true model of search does not require commitment to the correct school of modeling. It requires getting the details right.

References

  • Berbaum, K. S., Franken, E. A., Jr., Dorfman, D. D., Miller, E. M., Caldwell, R. T., Kuehn, D. M., et al. (1998). Role of faulty visual search in the satisfaction of search effect in chest radiography. Acad Radiol, 5(1), 9-19. doi:10.1016/S1076-6332(98)80006-8.
  • Berbaum, K. S., Franken, E. A., Jr., Dorfman, D. D., Rooholamini, S. A., Kathol, M. H., Barloon, T. J., et al. (1990). Satisfaction of search in diagnostic radiology. Invest Radiol, 25(2), 133-140.
  • Bichot, N. P., & Schall, J. D. (1999). Effects of similarity and history on neural mechanisms of visual selection. Nature Neuroscience, 2(6), 549-554. doi:10.1038/9205.
  • Birnkrant, R. S., Wolfe, J. M., Kunar, M., & Sng, M. (2004, April 29 - May 4, 2004). Is shininess a basic feature in visual search? Paper presented at the Visual Sciences Society, Sarasota, FL. doi:10.1167/4.8.678.
  • Blaser, E., Sperling, G., & Lu, Z. L. (1999). Measuring the amplification of attention. Proc Natl Acad Sci U S A, 96(20), 11681-11686. doi:10.1073/pnas.96.20.11681.
  • Brockmole, J. R., & Henderson, J. M. (2006). Using real-world scenes as contextual cues for search. Visual Cognition, 13(1), 99-108. doi:10.1080/13506280500165188.
  • Boot, W. R., McCarley, J. S., Kramer, A. F., & Peterson, M. S. (2004). Automatic and intentional memory processes in visual search. Psychon Bull Rev, 11(5), 854-861. doi:10.3758/BF03196712.
  • Carrasco, M., Evert, D. L., Chang, I., & Katz, S. M. (1995). The eccentricity effect: Target eccentricity affects performance on conjunction searches. Perception and Psychophysics, 57(8), 1241-1261. doi:10.3758/BF03208380.
  • Chen, X., & Zelinsky, G. J. (2006). Real-world visual search is dominated by top-down guidance. Vision Research, 46(24), 4118-4133. doi:10.1016/j.visres.2006.08.008.
  • Corbetta, M. (1998). Frontoparietal cortical networks for directing attention and the eye to visual locations: identical, independent, or overlapping neural systems? Proceedings of the National Academy of Sciences of the United States of America, 95(3), 831-838. doi:10.1073/pnas.95.3.831.
  • Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nat Rev Neurosci, 3(3), 201-215. doi:10.1038/nrn755.
  • Correani, A., Scot-Samuel, N., & Leonards, U. (2006). Luminosity - a perceptual "feature" of light-emitting objects? Vision Res, 46(22), 3915-3925. doi:10.1016/j.visres.2006.05.001.
  • Daoutis, C. A., Pilling, M., & Davies, I. R. L. (2006). Categorical effects in visual search for colour. Visual Cognition, 14(2), 217-240. doi:10.1080/13506280500158670.
  • Di Lollo, V., Enns, J. T., & Rensink, R. A. (2000). Competition for consciousness among visual events: the psychophysics of reentrant visual pathways. Journal of Experimental Psychology: General, 129(3), 481-507.
  • Doi, H., & Ueda, K. (2007). Searching for a perceived stare in the crowd. Perception, 36(5), 773-780. doi:10.1068/p5614.
  • Driver, J. (1998). The neuropsychology of spatial attention. In H. Pashler (Ed.), Attention (pp. 297-340). Hove, East Sussex, UK: Psychology Press Ltd.
  • Dukewich, K. & Klein, R. M. (2005) Implications of search accuracy for serial self-terminating models of search. Visual Cognition., 12, 1386-1403. doi:10.1080/13506280444000788.
  • Eckstein, M. P., Pham, B. T., Abbey, C. K., & Zhang, Y. (2006). The efficacy of reading around learned backgrounds. Proc. SPIE 6146, 6146ON.
  • Elder, J. H., Trithart, S., Pintilie, G., & MacLean, D. (2004). Rapid processing of cast and attached shadows. Perception, 33(11), 1319-1338. doi:10.1068/p5323.
  • Gilchrist, I. D., & Harvey, M. (2000). Refixation frequency and memory mechanisms in visual search. Current Biology, 10(19), 1209-1212. doi:10.1016/S0960-9822(00)00729-6.
  • Gilchrist, I. D., & Harvey, M. (2006). Evidence for a systematic component within scan paths in visual search. Visual Cognition, 14(4), 704-715. doi:10.1080/13506280500193719.
  • Gilchrist, I. D., North, A., & Hood, B. (2001). Is visual search really like foraging? Perception, 30(12), 1459-1464. doi:10.1068/p3249.
  • Goldsmith, M. (1998). What's in a location? Comparing object-based and space-based models of feature integration in visual search. J. Experimental Psychology: General, 127(2), 189-219.
  • Hahn, S., Carlson, C., Singer, S., & Gronlund, S. D. (2006). Aging and visual search: automatic and controlled attentional bias to threat faces. Acta Psychol (Amst), 123(3), 312-336. doi:10.1016/j.actpsy.2006.01.008.
  • He, S., Cavanagh, P., & Intriligator, J. (1996). Attentional resolution and the locus of visual awareness. Nature, 383(26 Sept 1996), 334-337. doi:10.1038/383334a0.
  • Henderson, J. M., & Ferreira, F. (2004). Scene perception for psycholinguists. In J. M. Henderson & F. Ferreira (Eds.), The interface of language, vision, and action: Eye movements and the visual world (pp. 1-58). New York: Psychology Press.
  • Hershler, O., & Hochstein, S. (2006). With a careful look: Still no low-level confound to face pop-out. Vision Res, 46(18), 3028-3035. doi:10.1016/j.visres.2006.03.023.
  • Hidalgo-Sotelo, B., Oliva, A., & Torralba, A. (2005). Human learning of contextual priors for object search: Where does the time go? Paper presented at the Proceedings of the 3rd Workshop on Attention and Performance in Computer Vision at CVPR 2005.
  • Hillstrom, A. P. (2000). Repetition effects in visual search. Perception and Psychophysics, 62(4), 800-817. doi:10.3758/BF03206924.
  • Hooge, I. T., Over, E. A., van Wezel, R. J., & Frens, M. A. (2005). Inhibition of return is not a foraging facilitator in saccadic search and free viewing. Vision Research, 45(14), 1901-1908. doi:10.1016/j.visres.2005.01.030.
  • Horowitz, T. S., & Wolfe, J. M. (1998). Visual search has no memory. Nature, 394(Aug 6), 575-577. doi:10.1038/29068.
  • Huang, L., Holcombe, A. O., & Pashler, H. (2004). Repetition priming in visual search: episodic retrieval, not feature priming. Mem Cognit, 32(1), 12-20. doi:10.3758/BF03195816.
  • Humphreys, G. W., Hodsoll, J., Olivers, C. N. L., & Yoon, E. Y. (2006). Contributions from cognitive neuroscience to understanding functional mechanisms of visual search. Visual Cognition, 14(4), 832-850. doi:10.1080/13506280500195516.
  • Itti, L., & Koch, C. (2001). Computational modelling of visual attention. Nature Reviews of Neuroscience, 2(3), 194-203. doi:10.1038/35058500.
  • Judy, P. F., Swensson, R. G., & Szulc, M. (1981). Lesion detection and signal-to-noise ratio in CT images. Med Phys, 8(1), 13-23. doi:10.1118/1.594903.
  • Khurana, B., & Kowler, E. (1987). Shared attentional control of smooth eye movement and perception. Vision Research, 27(9), 1603-1618. doi:10.1016/0042-6989(87)90168-4.
  • Klein, R. M.(1988). Inhibitory tagging system facilitates visual search. Nature, 334, 430-431. doi:10.1038/334430a0.
  • Klein, R. M., & Farrell, M. (1989). Search performance without eye movements. Perception & Psychophysics, 46(5), 476-482. doi:10.3758/BF03210863.
  • Klein, R. M., & MacInnes, W. J. (1999). Inhibition of return is a foraging facilitator in visual search. Psychological Science, 10(July), 346-352. doi:10.1111/1467-9280.00166.
  • Kowler, E., Anderson, E., Dosher, B., & Blaser, E. (1995). The role of attention in the programming of saccades. Vision Research, 35(13), 1897-1916. doi:10.1016/0042-6989(94)00279-U.
  • Kristjansson, A. (2006). Simultaneous priming along multiple feature dimensions in a visual search task. Vision Res, 46(16), 2554-2570.
  • Krupinski, E. A. (2005). Visual search of mammographic images: influence of lesion subtlety. Acad Radiol, 12(8), 965-969. doi:10.1016/j.acra.2005.03.071.
  • Kunar, M. A., Flusberg, S. J., Horowitz, T. S., & Wolfe, J. M. (2007). Does Contextual Cueing Guide the Deployment of Attention? J Exp Psychol Hum Percept Perform, 33(4), 816-828.
  • Kunar, M. A., Flusberg, S. J., & Wolfe, J. M. (2007). The role of memory and restricted context in repeated visual search. Perception & Psychophysics, in press.
  • Kundel, H. L. (1991). Search for lung nodules: The guidance of visual scanning. Investigative Radiology, 266, 777-787.
  • Kusunoki, M., Gottlieb, J., & Goldberg, M. E. (2000). The lateral intraparietal area as a salience map: The representation of abrupt onset, stimulus motion, and task relevance. Vision Research, 40(10), 1459-1468. doi:10.1016/S0042-6989(99)00212-6.
  • Lamme, V. A., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends Neurosci, 23(11), 571-579. doi:10.1016/S0166-2236(00)01657-X.
  • Levi, D. M., Klein, S. A., & Aitsebaomo, A. P. (1985). Vernier acuity, crowding and cortical magnification. Vision Research, 25, 963-977. doi:10.1016/0042-6989(85)90207-X.
  • Maljkovic, V., & Nakayama, K. (1994). Priming of popout: I. Role of features. Memory & Cognition, 22(6), 657-672. doi:10.3758/BF03209251.
  • McCarley, J. S., Wang, R. F., Kramer, A. F., Irwin, D. E., & Peterson, M. S. (2003). How much memory does oculomotor search have? Psychological Science, 14(5), 422-426. doi:10.1111/1467-9280.01457.
  • Morgan, M. J. (2006). The inefficiency of visual search for a target differing in duration is not explained by memory loss. ECVP abstract.
  • Mort, D. J., & Kennard, C. (2003). Visual search and its disorders. Curr Opin Neurol, 16(1), 51-57. doi:10.1097/00019052-200302000-00007.
  • Morvan, C., & Wexler, M. (2005). Reference frames in early motion detection. Journal of Vision, 5(2), 131-138. doi:10.1167/5.2.4.
  • Nagy, A. L., & Sanchez, R. R. (1990). Critical color differences determined with a visual search task. J. Optical Society of America - A, 7(7), 1209-1217. doi:10.1364/JOSAA.7.001209.
  • Neider, M. B., & Zelinsky, G. J. (2006a). Scene context guides eye movements during visual search. Vision Research, 46(5), 614-621. doi:10.1016/j.visres.2005.08.025.
  • Neider, M. B., & Zelinsky, G. J. (2006b). Searching for camouflaged targets: Effects of target-background similarity on visual search. Vision Res, 46(14), 2217-2235. doi:10.1016/j.visres.2006.01.006.
  • Ostrovsky, Y., Cavanagh, P., & Sinha, P. (2004). Perceiving illumination inconsistencies in scenes. Perception, 34, 1301-1314. doi:10.1068/p5418.
  • Palmer, J. (1995). Attention in visual search: Distinguishing four causes of a set size effect. Current Directions in Psychological Science, 4(4), 118-123. doi:10.1111/1467-8721.ep10772534.
  • Pashler, H., Dobkins, K., & Huang, L. (2004). Is contrast just another feature for visual selective attention? Vision Res, 44(12), 1403-1410. doi:10.1016/j.visres.2003.11.025.
  • Peterson, M. S., Kramer, A. F., Wang, R. F., Irwin, D. E., & McCarley, J. S. (2001). Visual search has memory. Psychological Science, 12(4), 287-292. doi:10.1111/1467-9280.00353.
  • Remington, R. W., Johnston, J. C., & Yantis, S. (1992). Involuntary attentional capture by abrupt onsets. Perception and Psychophysics, 51(3), 279-290. doi:10.3758/BF03212254.
  • Rensink, R. A., & Cavanagh, P. (2004). The influence of cast shadows on visual search. Perception, 33(11), 1339-1358. doi:10.1068/p5322.
  • Roggeveen, A. B., Kingstone, A., & Enns, J. T. (2004). Influence of inter-item symmetry in visual search. Spat Vis, 17(4-5), 443-464. doi:10.1163/1568568041920159.
  • Saalmann, Y. B., Pigarev, I. N., & Vidyasagar, T. R. (2007). Neural mechanisms of visual attention: How top-down feedback highlights significant locations. Science. doi:10.1126/science.1139140.
  • Sagi, D. (1990). Detection of an orientation singularity in Gabor textures: Effect of signal density and spatial-frequency. Vision Research, 30(9), 1377-1388. doi:10.1016/0042-6989(90)90011-9.
  • Sobel, K. V., & Cave, K. R. (2002). Roles of salience and strategy in conjunction search. Journal of Experimental Psychology: Human Perception & Performance, 28(5), 1055-1070. doi:10.1037/0096-1523.28.5.1055.
  • Theeuwes, J. (1994). Stimulus-driven capture and attentional set: selective search for color and visual abrupt onsets. Journal of Experimental Psychology: Human Perception and Performance, 20(4), 799-806. doi:10.1037/0096-1523.20.4.799.
  • Thompson, K. G., Bichot, N. P., & Sato, T. R. (2005). Frontal Eye Field Activity Before Visual Search Errors Reveals the Integration of Bottom-Up and Top-Down Salience. Journal of Neurophysiology, 93(1), 337-351. doi:10.1152/jn.00330.2004.
  • Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychological Review, 113(4), 766-786. doi:10.1037/0033-295X.113.4.766.
  • Treisman, A. (1999). Solutions to the binding problem: progress through controversy and convergence. Neuron, 24(1), 105-110, 111-125. doi:10.1016/S0896-6273(00)80826-0.
  • Treue, S., & Martinez-Trujillo, J. C. (2006). Visual search and single-cell electrophysiology of attention: Area MT, from sensation to perception. Visual Cognition, 14(4), 898-910. doi:10.1080/13506280500197256.
  • Tsotsos, J. K., Culhane, S. N., Wai, W. Y. K., Lai, Y., Davis, N., & Nuflo, F. (1995). Modeling visual attention via selective tuning. Artificial Intelligence, 78, 507-545. doi:10.1016/0004-3702(95)00025-9.
  • van Zoest, W., Giesbrecht, B., Enns, J. T., & Kingstone, A. (2006). New reflections on visual search: Interitem symmetry matters! Psychol Sci, 17(6), 535-542. doi:10.1111/j.1467-9280.2006.01740.x.
  • VanRullen, R. (2006). On second glance: Still no high-level pop-out effect for faces. Vision Res, 46(18), 3017-3027.
  • Williams, L. J. (1985). Tunnel vision induced by a foveal load manipulation. Human Factors, 27(2), 221-227.
  • Wolfe, J., Alvarez, G., & Horowitz, T. (2000). Attention is fast but volition is slow. Nature, 406, 691. doi:10.1038/35021132.
  • Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin and Review, 1(2), 202-238. doi:10.3758/BF03200774.
  • Wolfe, J. M. (1998). What do 1,000,000 trials tell us about visual search? Psychological Science, 9(1), 33-39. doi:10.1111/1467-9280.00006.
  • Wolfe, J. M. (2007). Guided Search 4.0: Current Progress with a model of visual search. In W. Gray (Ed.), Integrated Models of Cognitive Systems (pp. 99-119). New York: Oxford.
  • Wolfe, J. M., Friedman-Hill, S. R., Stewart, M. I., & O'Connell, K. M. (1992). The role of categorization in visual search for orientation. J. Exp. Psychol: Human Perception and Performance, 18(1), 34-49. doi:10.1037/0096-1523.18.1.34.
  • Wolfe, J. M., & Horowitz, T. S. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5(6), 495-501. doi:10.1038/nrn1411.
  • Wolfe, J. M., Horowitz, T. S., Van Wert, M. J., Kenner, N. M., Place, S. S., & Kibbi, N. (2007). Low target prevalence is a stubborn source of errors in visual search tasks. Journal of Experimental Psychology: General, accepted for publication.
  • Wolfe, J. M., Klempen, N., & Dahlen, K. (2000). Post-attentive vision. Journal of Experimental Psychology:Human Perception & Performance, 26(2), 693-716. doi:10.1037/0096-1523.26.2.693.
  • Zelinsky, G. J., & Sheinberg, D. L. (1997). Eye movements during parallel-serial visual search. Journal of Experimental Psychology: Human Perception & Performance, 23(1), 244-262. doi:10.1037/0096-1523.23.1.244.

See also

Binding Problem, Eye Movements, Inhibition of Return, Vision, Visual Attention, Visual Salience, Gestalt principles
