Astrid M.L. Kappers and Wouter M. Bergmann Tiest (2015). Scholarpedia 10(4): 32734. doi:10.4249/scholarpedia.32734
Humans can distinguish many object and surface properties by touch. Some of these properties are highly salient: they are immediately perceived after just a brief touch. Since the processing of such features is apparently highly efficient, it is of interest to investigate which features are salient to touch and which are not. In vision, there already exists a large body of literature concerning salient features (e.g., Itti, 2007). A typical way to investigate visual saliency is by means of a search task (e.g., Treisman and Gelade, 1980; Treisman and Gormican, 1988; Wolfe and Horowitz, 2008). Here, we will describe this search task, its adaptations to haptic research and give an overview of the features that are salient to touch.
Visual search
In Figure 1a and b, examples are given of a red disk among blue disks. In both examples, the red disk is seen immediately; finding it does not depend on the number of blue distractor items in its direct neighbourhood. On the other hand, finding the letter “o” among the distracting letters “c”, as in Figure 1d and e, is much harder and clearly depends on the number of distractors. In a typical visual search task, participants are confronted with a series of displays like those in this figure (with one type of stimulus) and have to decide as quickly as possible whether or not the target item is present, without making too many errors. A measure of the saliency of the target property (in these examples “red” or “opening”, respectively) is the so-called search slope: the slope of the line fitted through the response times plotted against the total number of items. If the property is salient, the search slope will be rather flat, as the response times hardly depend on the number of distractors (see Figure 1c); the target item is said to “pop out” and the search is termed “parallel”. In the case of a less salient target, the response times increase with the number of distractors (see Figure 1f) and the search is termed “serial”: participants have to go over the items one by one until the target is found or, in case the target is not present, all items have been inspected.
Although the search slope gives a good indication of the saliency of the target property, it should be noted that the division between parallel and serial search is not clear-cut (Wolfe, 1998): the (relative) saliency of a property depends on the context. For example, while the property “red colour” might be salient among blue items, it will be much less so among orange items. Search slopes can therefore be anything between very flat and very steep. The important message is that in some conditions colour can be salient, and thus colour must be processed very fast, or very early on, in the visual system.
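The search slope is simply the slope of a least-squares line through response times as a function of the number of items. A minimal sketch in Python, with made-up response times and an illustrative threshold for calling a search “parallel” (the 10 ms/item cut-off is an assumption for demonstration, not a value from the literature):

```python
# Sketch: estimating a search slope from (set size, response time) data.
import statistics

def search_slope(set_sizes, response_times_ms):
    """Least-squares slope of response time against number of items (ms/item)."""
    mean_x = statistics.fmean(set_sizes)
    mean_y = statistics.fmean(response_times_ms)
    sxx = sum((x - mean_x) ** 2 for x in set_sizes)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(set_sizes, response_times_ms))
    return sxy / sxx

# Hypothetical data: a salient ("pop-out") target vs a serial search.
sizes = [1, 2, 3, 4, 5, 6]
rt_popout = [510, 512, 515, 514, 518, 520]   # nearly flat in set size
rt_serial = [520, 610, 705, 790, 880, 975]   # rises with set size

for label, rts in [("pop-out", rt_popout), ("serial", rt_serial)]:
    slope = search_slope(sizes, rts)
    kind = "parallel" if slope < 10 else "serial"   # illustrative threshold
    print(f"{label}: {slope:.1f} ms/item -> {kind}")
```

With these numbers the flat condition yields a slope of about 1.9 ms/item and the steep one about 90.6 ms/item, mirroring the qualitative difference between Figure 1c and 1f.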
Haptic search for saliency
In this section, we will first describe the various methods that have been introduced to make the visual search paradigm suitable for touch experiments. In the subsections that follow, we will discuss the results obtained with these methods for the various stimulus properties that have been studied.
Lederman and colleagues (1988) were the first to adapt the visual search paradigm to the sense of touch. Stimuli could be presented to the index, middle and ring fingers of both hands. As in the visual search tasks, blindfolded participants were required to decide as quickly as possible whether a stimulus with a certain property or combination of properties was present or not. In an extensive follow-up study using a sophisticated device (see Moore et al. (1991) for a description), Lederman and Klatzky (1997) investigated a large number of stimulus properties, such as roughness, hardness, temperature, surface discontinuity and surface orientation. From trial to trial, the number of stimulated fingers varied from 1 to 6, with at most one target present. The fingers could make small movements, but basically remained in place. The search slopes served as the outcome measure.
A key feature of haptic perception is that it is active, and this activity may be essential for optimal performance. Therefore, Plaisier et al. (2008a) created a dynamic version of the search task, first in two dimensions and subsequently also in three. In two dimensions, displays similar to the visual ones were produced, with smooth and rough patches of sandpaper as target and distractors (in either role). Participants were asked to sweep their hand over the display in search of a target and, again, decide as fast as possible whether the target was present. Depending on the difference between the roughnesses of target and distractors, movement patterns were either simple (see Movie in Figure 2) or complex (see Movie in Figure 3), and this was used as an additional measure for distinguishing parallel from serial searches.
In a three-dimensional version of the active haptic search task, Plaisier et al. (2008b) asked participants to grasp a bunch of items, such as spheres, cubes or ellipsoids. Again, a target could either be present or not. With easily discriminable features, a single grasp sufficed, but in harder discrimination conditions, the items had to be explored one by one and occasionally be thrown out of the hand (see Movie in Figure 4). As release of items is a clear indication of serial search, the percentage of trials with released items was taken as an additional measure for saliency.
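The release-based measure is straightforward to compute from trial records: count the proportion of trials in which at least one item left the hand. A small sketch with hypothetical data (the function name and the per-trial counts are invented for illustration):

```python
# Sketch: the item-release measure from grasp-based haptic search.
# A trial counts toward the measure as soon as at least one item
# was released from the hand during that trial.

def release_percentage(released_counts):
    """Percentage of trials in which at least one item was released."""
    trials_with_release = sum(1 for n in released_counts if n > 0)
    return 100.0 * trials_with_release / len(released_counts)

# Hypothetical sessions: an easy condition (e.g. cube among spheres)
# and a hard one requiring item-by-item exploration.
easy_trials = [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]   # items released per trial
hard_trials = [2, 1, 0, 3, 2, 1, 1, 0, 2, 1]

print(release_percentage(easy_trials))  # 10.0
print(release_percentage(hard_trials))  # 80.0
```

A low percentage is consistent with a single-grasp, parallel search; a high percentage indicates serial exploration.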
Another way to address the haptic saliency of features is to investigate what can be observed in a very brief touch. In vision, salient properties pop out and can be observed in just a glance. Klatzky and Lederman (1995) investigated the recognition of objects in what they termed a “haptic glance”, a brief touch of about 200 ms. Features that can be observed within such a short period pop out and must be processed early on in the haptic system.
Roughness
In the experiments of Lederman and Klatzky (1997), the target could be either rough or smooth and the difference between the roughnesses of target and distractor could be either small or large. In the easy condition with the rough target, they found very shallow search slopes. These were slightly higher when the target was smooth. In the difficult condition, all search slopes were very steep, indicating that participants had to examine the items one at a time. The important finding was that it was possible to create a condition with very flat search slopes, indicating that roughness could be processed fast. The saliency of roughness depends, as always, on the relative difference with the distractors.
Examples of the displays used by Plaisier et al. (2008a) are shown in Figure 5. Although their experimental set-up and procedure were quite different from those of Lederman and Klatzky (1997), they obtained similar results: very flat search slopes for the rough target among smooth distractors, somewhat higher slopes for the smooth target among rough distractors and much steeper slopes if the difference between target and distractor was smaller.
They also measured and analysed the movement patterns of the participants by placing an infra-red LED on the index finger of the exploring hand and recording its position with an NDI Optotrak Certus system. Schematic examples of such movement tracks can be seen in Figure 6. In easy searches, a single hand sweep over the display was sufficient to determine whether or not a target was present. Note that in the example of Figure 6a the tip of the index finger did not even touch the target; touching the target item with the palm of the hand was sufficient to make the decision. In difficult searches, the items had to be explored one by one with the fingertips, as can clearly be seen in Figure 6b. As a consequence, the length of the trajectory was shorter in easy searches and the movement speed was higher. Based on these findings, that is, flat search slopes and simple movement tracks, Plaisier and colleagues concluded that roughness is an object property that might “pop out” and thus is a salient feature for touch.
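Trajectory length and mean speed of the kind reported here can be derived directly from sampled marker positions. A minimal sketch, assuming 2-D positions in millimetres and a 100 Hz sampling rate (both invented for illustration):

```python
# Sketch: trajectory metrics from sampled finger-marker positions,
# as one might compute them from Optotrak-style recordings.
import math

def path_length(points):
    """Total length of a piecewise-linear track through 2-D points (mm)."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def mean_speed(points, sample_rate_hz):
    """Average speed along the track (mm/s), given the sampling rate."""
    duration_s = (len(points) - 1) / sample_rate_hz
    return path_length(points) / duration_s

# A short, straight-ish sweep vs a longer, meandering item-by-item track.
sweep = [(0, 0), (50, 5), (100, 10), (150, 15)]
serial_track = [(0, 0), (30, 40), (0, 80), (30, 120), (0, 160)]

print(path_length(sweep))         # ~150.7 mm
print(path_length(serial_track))  # 200.0 mm over a shorter net distance
print(mean_speed(sweep, 100.0))   # assumes a 100 Hz sampling rate
```

A shorter path covered at higher speed is the signature of an easy, parallel search; a long, zig-zagging path indicates serial exploration.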
Allowing participants just a haptic glance, Klatzky and Lederman (1995) investigated whether familiar objects could be recognized within 200 ms. When participants did not get any prior information about the object that would be presented, they were still able to recognize 15 % of the objects. Performance increased when participants were given prior information about the kind of object (47 % correct) or if they only had to decide whether or not the stimulus was a certain named object (74 % correct). One of their conclusions was that texture, of which roughness was one of the aspects, could be extracted within a haptic glance.
Edges
In the search experiments of Lederman and Klatzky (1997), where stimuli were pressed to the fingers, a small raised bar was used as an edge. Its orientation could be either horizontal or vertical (that is, perpendicular to or aligned with the finger, respectively). If the presence of such an edge had to be detected among flat surfaces, or vice versa, search slopes were relatively shallow. Other types of edges, such as a cylindrical hole in a flat surface, also led to good performance. Although these response times were somewhat higher than in their roughness experiments, the conclusion still had to be that abrupt surface discontinuities, or edges, are processed early on in the haptic system. However, if a specific orientation of an edge (horizontal or vertical) served as target among distractors with the perpendicular orientation, performance was much slower. Therefore, relative orientation must be processed much later in the system.
In the experiments of Plaisier et al. (2009a), participants had to grasp bunches of items with or without edges, such as cubes and spheres (see Figure 7 and Movie in Figure 4). Especially for the detection of a cube or a tetrahedron among spheres, search slopes were very flat. The reverse situation, a sphere among cubes, also led to fast performance. In these experiments, participants were allowed to remove items from their hand if they thought that this would speed up their performance. Release of items is a clear indication that the search could not be performed in parallel; therefore, the percentage of trials in which at least one item is removed is another measure of whether the search is parallel or serial. In the search for a cube among spheres, hardly any trials occurred in which an item was released, and when the target was a tetrahedron, no items were released at all. A single grasp followed by a quick release of all items was sufficient, which is indeed strong evidence for a parallel search. Prominent features of a cube and a tetrahedron, in comparison to a sphere, are their edges and vertices. These results indicate that edges and vertices are very salient features for touch.
The fact that edges and vertices are salient to touch can also be shown by their disrupting influence on other tasks. Using a search paradigm similar to that shown in Figure 7, Van Polanen et al. (2013) showed that it was much harder to decide whether a rough target was present among smooth distractors if the shapes were cubes instead of spheres. Thus although the shape was irrelevant to the task, the saliency of the edges and vertices distracted from the saliency of the roughness and performance deteriorated. In a completely different experimental paradigm, Panday et al. (2012) found that the presence of edges disrupted the perception of overall shape.
Movability
Pawluk and colleagues (2010; 2011) studied whether participants were able to decide within a haptic glance whether or not an object was present, that is, to segregate figure from ground. They found that if an object moves upon touch, this greatly adds to the perception of indeed touching an object and not just the background.
Van Polanen and colleagues (2012a) investigated movability using a search task. They created displays with a varying number of ball transfer units (for examples, see Figure 8). By gluing some of the balls to their casing, they created stationary stimuli. Participants were instructed to move their hand over the display and decide as fast as possible whether the target (either a movable or a stationary stimulus) was present. Again, search slopes were one of the measures of saliency. They found that if the target was the movable stimulus, search slopes were very flat and the target “popped out” from the distractors. In the reverse case, the response times were higher and indicated a more serial search. They also measured and analysed the movement patterns, and these confirmed the conclusions drawn from the search slopes: a movable target is salient and can be found in a single hand sweep; a stationary target among movable distractors is harder to find and requires serial exploration. This asymmetry is again an indication of the saliency of movability: if many items move, this disrupts performance.
Plaisier et al. (2009b) investigated the role of fixation of the items in a search task. Instead of stimuli attached to flexible wires as shown in Figure 7, they pulled the wires to which the cubes and spheres were fixed through small metal tubes, thereby restricting the movability of the shapes. By lowering the wire 0.5 cm, they could also create a condition which allowed the stimuli a limited amount of movability. They found the fastest response times and thus the lowest slopes in the conditions where all items were fixed. Compared to their earlier research with the flexible wires (Plaisier et al., 2009a), the slopes in the completely fixed condition were even lower (but care should be taken with this finding, as the experimental conditions were somewhat different).
Hardness
Lederman and Klatzky (1997) used two types of rubber (hard and soft) to investigate the saliency of hardness. A typical movement their participants made was pressing their fingers on the surfaces. They found relatively fast performance, both with the hard and the soft stimuli as target. In the overview of their experiments, hardness ranked second in saliency, directly after roughness.
Van Polanen and colleagues (2012b) created a set of spheres made of three types of silicone rubber that therefore differed in hardness; the “hard” and “middle-soft” spheres were solid, the “soft” ones were hollow. These spheres were used in both two-dimensional (see Figure 9) and three-dimensional search tasks. In the three-dimensional task, the spheres were suspended from wires and participants had to grasp the bunch, as in the experiments described above. Especially the conditions with the hard and soft spheres as target and distractors (in either order) led to very flat search slopes. The very low number of items released from the hand in these conditions also indicated a parallel search. Conditions with the middle-soft stimuli as either target or distractor were more difficult and led to steeper search slopes.
Although these results indicated that hardness could be a salient feature, the possibility existed that the fast performance was not (only) due to the saliency of hardness, but (also) due to a weight difference: hard spheres were heavier than soft spheres, so a bunch of, for example, four items would be somewhat heavier if the target were present. Therefore, a two-dimensional version of this experiment was designed in which the participants had to press their hand on a static display (see Figure 9b). In this experiment, the weight of the stimuli could not play a role. Again, a hard target popped out among soft distractors. The opposite was not the case, and this was explained by the fact that the surrounding hard distractors blocked the hand from compressing the soft target, which thus could not be identified. The combination of the results of these two experiments led to the conclusion that both hardness and softness can be salient features to touch.
Temperature
Lederman and Klatzky (1997) did not actually vary the temperature of their stimuli; instead, they used different materials which, due to their different heat conductance, were perceived as either warm (soft pine) or cool (copper). They found search slopes that were only slightly steeper than those for roughness and hardness. However, overall response times were higher, which should not be surprising, as it takes some time before the difference in heat flow creates a temperature difference that can be observed.
Plaisier and Kappers (2010) used the brass spheres from the earlier experiments (Plaisier et al., 2009a), of which they manipulated the temperature. The target was always a sphere at room temperature (about 22 °C), while the distractors had a temperature of about 38 °C. The distractor spheres were heated by placing them on a plastic layer on a water bath of about 41 °C. The hand temperature of the participants was kept between 28 and 33 °C by having them place their hand on a plastic layer on another water bath before and in between trials. The set-up and experimental paradigm were otherwise the same as in the previous experiments. The search slopes were similar to those for tetrahedra among spheres, and as participants did not release any items, this provided strong evidence for parallel search. Therefore, they concluded that coldness pops out and thus temperature can be a salient feature to touch.
Bergmann Tiest and colleagues (2012) investigated the influence of thermal conductivity on the haptic perception of volume. Participants had to compare the size of a cube made of a synthetic material to that of a brass cube. The results showed that at room temperature, the brass cube was perceived as significantly larger than the synthetic cube. Cooling or heating the brass cube made no difference, so the effect is independent of temperature itself. The brass cube has a higher thermal conductivity than the synthetic one, so the heat flow out of the hand is higher for the brass cube; it was concluded that the salience of this heat flow resulted in a larger perceived size.
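Why a metal object at room temperature feels colder than a plastic one can be illustrated with the standard contact-temperature formula for two semi-infinite bodies: the interface temperature is the mean of the two initial temperatures weighted by each material's thermal effusivity b = √(kρc). The material constants below are rough textbook values, assumed purely for illustration:

```python
# Sketch: effusivity-weighted contact temperature of skin on an object.
import math

def effusivity(k, rho, c):
    """Thermal effusivity b = sqrt(k * rho * c), in W*s^0.5/(m^2*K)."""
    return math.sqrt(k * rho * c)

def contact_temperature(t1, b1, t2, b2):
    """Interface temperature of two semi-infinite bodies on contact (deg C)."""
    return (b1 * t1 + b2 * t2) / (b1 + b2)

# Rough, assumed material constants: conductivity k (W/m/K),
# density rho (kg/m^3), specific heat c (J/kg/K).
skin = effusivity(k=0.37, rho=1100, c=3400)
brass = effusivity(k=110, rho=8500, c=380)
plastic = effusivity(k=0.2, rho=1200, c=1500)

for name, b in [("brass", brass), ("plastic", plastic)]:
    t = contact_temperature(33.0, skin, 22.0, b)  # skin 33 C, object 22 C
    print(f"{name}: contact temperature {t:.1f} C")
```

With these assumed constants, the skin-brass interface settles near 22.6 °C while the skin-plastic interface stays near 29.3 °C: the high-effusivity brass draws far more heat from the hand, which is the heat flow invoked above.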
Discussion
In their research, Lederman and Klatzky (1997) were especially interested in the order in which stimulus features are processed by the haptic system. In their set-up, stimuli were pressed to the almost static fingers. They found that material properties like roughness and hardness are almost immediately available upon contact. Abrupt surface discontinuities like edges were also processed fast, though slightly slower. All these features are most likely processed in parallel by the haptic system. Orientations of edges or surfaces require much longer exploration times, and there is no indication that such information can be processed in parallel. These findings are in agreement with results obtained in more active exploration conditions: roughness (Plaisier et al., 2008a; van Polanen et al., 2013), hardness (van Polanen et al., 2012b), movability (van Polanen et al., 2012a), edges (Plaisier et al., 2009a) and temperature (Plaisier and Kappers, 2010) were all found to be salient features to touch. These features can be processed fast and early by the haptic system, and by their presence they may disrupt the perception of other features.
Relevance
Salient features stand out among their surroundings. Knowledge about which features are salient and which are not is therefore essential in situations where haptic stimulation is used to convey information. One can think of very diverse situations. In gaming and in virtual worlds, haptic sensations may add to the experience of immersion; in navigation and communication, haptic stimulation provides information instead of, or in addition to, visual and auditory information; in teleoperation, complicated actions have to be performed either with or without vision. For the purpose of multisensory navigation and communication, Elliott and colleagues (2013) developed a model of tactile salience, which depends on user, environment, technology and tactile parameters. They built a prototype system for the guidance of soldiers, which received positive feedback from potential users. For the further development of robotic hands and hand prosthetics, too, understanding haptic perception is of great importance. Not only is the human sense of touch so sophisticated that it may inspire technology, it may also guide the direction of research, as salient features will obviously be more important than non-salient ones.
References
- Bergmann Tiest, W M; Kahrimanovic, M; Niemantsverdriet, I; Bogale, K and Kappers, A M L (2012). Salient material properties and haptic volume perception: The influences of surface texture, thermal conductivity, and compliance. Attention, Perception & Psychophysics 74(8): 1810-1818.
- Elliott, L R et al. (2013). Development of dual tactor capability for a soldier multisensory navigation and communication system. In: S Yamamoto (Ed.), HIMI/HCII 2013, Part II, Volume 8017 of Lecture Notes in Computer Science (pp. 46-55). Berlin: Springer-Verlag.
- Klatzky, R L and Lederman, S J (1995). Identifying objects from a haptic glance. Perception & Psychophysics 57(8): 1111-1123.
- Lederman, S J; Browse, R A and Klatzky, R L (1988). Haptic processing of spatially distributed information. Perception & Psychophysics 44(3): 222-232.
- Lederman, S J and Klatzky, R L (1997). Relative availability of surface and object properties during early haptic processing. Journal of Experimental Psychology: Human Perception and Performance 23(6): 1680-1707.
- Moore, T; Broekhoven, M; Lederman, S J and Wug, S (1991). Q’Hand: A fully automated apparatus for studying haptic processing of spatially distributed inputs. Behavior Research Methods, Instruments & Computers 23: 27-35.
- Ni, B et al. (2014). Touch saliency: Characteristics and prediction. IEEE Transactions on Multimedia 16(6): 1779-1791.
- Panday, V; Bergmann Tiest, W M and Kappers, A M L (2012). Influence of local properties on haptic perception of global object orientation. IEEE Transactions on Haptics 5(1): 58-65.
- Pawluk, D; Kitada, R; Abramowicz, A; Hamilton, C and Lederman, S J (2010). Haptic figure-ground differentiation via a haptic glance. In: IEEE Haptics Symposium (pp. 63-66).
- Pawluk, D; Kitada, R; Abramowicz, A; Hamilton, C and Lederman, S J (2011). Figure/ground segmentation via a haptic glance: Attributing initial finger contacts to objects or their supporting surfaces. IEEE Transactions on Haptics 4(1): 2-13.
- Plaisier, M A; Bergmann Tiest, W M and Kappers, A M L (2008a). Haptic pop-out in a hand sweep. Acta Psychologica 128: 368-377.
- Plaisier, M A; Bergmann Tiest, W M and Kappers, A M L (2008b). Haptic search for spheres and cubes. In: M Ferre (Ed.), Haptics: Perception, Devices and Scenarios, Volume 5024 of Lecture Notes in Computer Science (pp. 275-282). Berlin/Heidelberg: Springer.
- Plaisier, M A; Bergmann Tiest, W M and Kappers, A M L (2009a). Salient features in three-dimensional haptic shape perception. Attention, Perception & Psychophysics 71(2): 421-430.
- Plaisier, M A and Kappers, A M L (2010). Cold objects pop out! In: A M L Kappers, J B F van Erp, W M Bergmann Tiest and F C T van der Helm (Eds.), Haptics: Generating and perceiving tangible sensations. Part II, Volume 6192 of Lecture Notes in Computer Science (pp. 219-224). Berlin/Heidelberg: Springer.
- Plaisier, M A; Kuling, I A; Bergmann Tiest, W M and Kappers, A M L (2009b). The role of item fixation in haptic search. In: J Hollerbach (Ed.), Proceedings 3rd Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (pp. 417-421). Salt Lake City, UT, USA: IEEE.
- Treisman, A M and Gelade, G (1980). A feature-integration theory of attention. Cognitive Psychology 12(1): 97-136.
- Treisman, A and Gormican, S (1988). Feature analysis in early vision: evidence from search asymmetries. Psychological Review 95(1): 15-48.
- Van Polanen, V; Bergmann Tiest, W M and Kappers, A M L (2012a). Haptic pop-out of movable stimuli. Attention, Perception & Psychophysics 74(1): 204-215.
- Van Polanen, V; Bergmann Tiest, W M and Kappers, A M L (2012b). Haptic search for hard and soft spheres. PLoS ONE 7(10): e45298.
- Van Polanen, V; Bergmann Tiest, W M and Kappers, A M L (2013). Integration and disruption effects of shape and texture in haptic search. PLoS ONE 8(7): e70255.
- Wolfe, J M (1998). What can 1 million trials tell us about visual search? Psychological Science 9(1): 33-39.
- Xu, M et al. (2012). Touch saliency. In: Proceedings of the 20th ACM International Conference on Multimedia (pp. 1041-1044).
- Itti, L (2007). Visual salience. Scholarpedia 2(9): 3327.
- Wolfe, J M and Horowitz, T S (2008). Visual search. Scholarpedia 3(7): 3325.
Note on terminology
One final note about “touch saliency”. Ni, Xu and colleagues investigated the characteristics of what they call “touch saliency” (Xu et al., 2012; Ni et al., 2014). Although this term seems very relevant in the present context, it is not. What they mean by touch saliency are the finger fixation maps on touch screens, which might provide useful information about the regions of interest on the screen. As these regions can only be perceived and identified visually, this does not tell us anything about touch perception. A more appropriate term would be “touch map”.