Gerald Westheimer (2011), Scholarpedia, 6(8):9973.
Hyperacuity is the term applied to a sensory capability that transcends sampling limits set by discrete receiving elements.
Sensory discrimination is limited by the structure of the receptor apparatus that interfaces an organism with its environment, e.g., retinal cones in daylight vision. Acuity, literally sharpness, depends on the distance between these light receptors, because each receptor's output is coded only for its own location and cannot be further subdivided. To determine whether a visual target is single or double requires differentiated excitation of at least three receptors in a row, the two flanking ones more stimulated and the intervening one less (Fig. 1). Established doctrine thus has visual acuity, the resolution performance of the eye, governed by retinal receptor spacing. Not accidentally, the quality of the eye’s optics has evolved to provide a match (Kaufman & Alm, 2003).
The many discrimination abilities in which the human observer surpasses the acuity limit, often by a large factor, are now called hyperacuity. Though the sensory input is always funneled through the primary receptor stage and thus remains subject to the compartmentalization imposed there, information on structural differences in the world of objects more minute than the spacing of the elements of the sensory mosaic can be extracted from the activity pattern within an ensemble of sensory neurons. Hyperacuity is the result of circuitry in the brain that distills this information.
Vernier alignment acuity is a prime example of a hyperacuity: in the human fovea, two lines must be at least 1 arcminute apart to be assured that they are seen as separate (resolved), but a misalignment of 1/10 of this value can easily be detected in two abutting lines. This gives a hint of the two kinds of neural processing. For acuity, two activity peaks must be sufficiently sharp and far enough apart within the ensemble that the possibility of their overlap has been minimized and the differentiated excitation of the middle of a row of at least three receptors is assured. For vernier, it is the precision of the location of each peak that matters (Fig. 2); it is extracted by circuits that find the centroid of the activity through something akin to a vector sum within an ensemble’s activity, and this step is followed by a differencing operation, much like that in a differential amplifier. Hence it is not surprising that thresholds are immune to irrelevant signal perturbations common to the whole configuration, such as small overall pattern movements, but there is strong need for onset synchrony and similarity in color and contrast polarity of the components (Westheimer, 2008).
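The two-stage account above (centroid extraction followed by differencing) can be illustrated with a minimal numerical sketch. It assumes an idealized row of evenly spaced receptors and a Gaussian optical blur; the receptor spacing, blur width, and offset values are illustrative, not physiological measurements:

```python
import numpy as np

# Hypothetical receptor row: spacing 1 unit (think ~1 arcmin);
# optical blur spreads each thin line over several receptors.
receptors = np.arange(0.0, 20.0, 1.0)   # receptor centres
blur_sigma = 1.0                        # blur width, in receptor spacings

def responses(line_pos):
    """Graded excitation of the receptor row by a thin line."""
    return np.exp(-(receptors - line_pos) ** 2 / (2 * blur_sigma ** 2))

def centroid(r):
    """Locate an activity peak by its centroid (a vector-sum estimate)."""
    return np.sum(receptors * r) / np.sum(r)

# Two abutting line segments misaligned by 0.1 receptor spacings --
# one tenth of the nominal resolution limit.
upper = responses(10.0)
lower = responses(10.1)

offset = centroid(lower) - centroid(upper)   # the differencing stage
print(round(offset, 3))   # prints 0.1
```

Because the offset is the difference of two centroids, a perturbation common to both segments (e.g., a small overall pattern shift added to both line positions) cancels out, mirroring the immunity to common-mode disturbances noted above.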
Because, within limits, the larger the ensemble of neurons used for signal extraction for each of the constituent pattern elements, the better the performance, hyperacuity thresholds are subject to interference when there is crowding in the target domain (Levi et al., 1985).
The word acuity had been preempted by centuries of association with resolving or separating power. In vision, it is embodied in the time-honored 20/20 standard in eye charts, designed with letter strokes 1 arcmin wide, with corresponding enlargement for poorer acuity. But many spatial judgments transcend this limit, of which vernier alignment, to which the term hyperacuity was first attached (Westheimer, 1975), is only one.
Other notable examples of visual hyperacuity are:
- Curvature detection: the judgment of deviations from a contour’s rectilinearity;
- Sharpness or smoothness of an edge: detection of edge or line corrugations;
- Stereoacuity: the ability to discriminate very small differences in depth in three-dimensional configurations.
In each of these cases, thresholds are a small fraction of a cone diameter.
Elements of the detecting apparatus of sensory systems are not distributed continuously over the stimulus domain and for reasons of economy are usually spaced with little or no overlap. But when their responses are graded in intensity, location within the realm of stimuli can be ascertained with a much finer grain than the individual receptor stations. A good insight into the process is afforded by what is called anti-aliasing in computer displays. Only discretely localized compartments of LCD screens can be lit up, but by varying the brightness within a small group of adjoining pixels, small blobs whose positions are governed by their light centroids can be localized with sub-pixel resolution. This localizing ability or hyperacuity in no way nullifies the resolution limit (resolving power = acuity) mandated by the pixel spacing.
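The anti-aliasing analogy can be made concrete. In the sketch below, a point of light at a sub-pixel position is rendered by splitting its intensity between the two nearest pixels (the standard linear-interpolation kernel); a centroid read-out then recovers the position far more finely than the pixel spacing. The grid size and position are arbitrary illustrative choices:

```python
import numpy as np

# Pixel centres at integer positions; a point of light at a sub-pixel
# position is anti-aliased by dividing its intensity between the two
# nearest pixels (linear-interpolation kernel).
pixels = np.arange(0, 8)
true_centre = 3.4
intensity = np.clip(1.0 - np.abs(pixels - true_centre), 0.0, None)
# intensity: pixel 3 receives 0.6, pixel 4 receives 0.4, all others 0

# The blob's perceived location is its light centroid,
# recovered with sub-pixel precision despite 1-pixel sampling.
estimate = np.sum(pixels * intensity) / np.sum(intensity)
print(round(estimate, 2))   # prints 3.4
```

Note that this localizing ability changes nothing about resolution: two such blobs placed less than one pixel apart would still merge into a single activity hump, exactly as the text states.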
Such a view of neural processing has strong implications for the simplistic version of the neuron doctrine of perception, according to which each percept has its neural counterpart embodied in a single neuron. From the fact that locations in visual space can be determined to a few seconds of arc it need not be concluded that the neural visual field is tiled with the astronomical number of neurons that would be needed to represent that many locations. The alternative interpretation envisages local differencing circuits, far fewer in number, operating on a range of input signals to arrive at relative position estimates much finer than the spacing of the ensemble of discrete units through which the image had been funneled. A mathematical analysis of this situation has been offered by Snippe & Koenderink (1992).
A specific and very instructive example of hyperacuity processing is in color vision. Here the number of stations in the stimulus spectrum is limited to three in normal observers, though in this particular case they have widely overlapping acceptor functions (spectral absorption curves). Finely-tuned differencing operations between their outputs allow thousands of hues to be differentiated.
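A sketch of this principle, with three hypothetical cone classes modelled as broadly overlapping Gaussian sensitivity curves (the peak wavelengths and bandwidth are rough illustrative values, not measured human cone fundamentals), and a differencing stage loosely patterned on opponent-color channels:

```python
import numpy as np

# Three hypothetical cone classes with broadly overlapping Gaussian
# spectral sensitivities; peaks loosely placed near S, M, L cones (nm).
peaks = np.array([440.0, 535.0, 565.0])
width = 60.0   # illustrative bandwidth

def cone_responses(wavelength_nm):
    return np.exp(-(wavelength_nm - peaks) ** 2 / (2 * width ** 2))

def opponent_signals(wavelength_nm):
    """Differencing stage: crude red-green and blue-yellow channels."""
    s, m, l = cone_responses(wavelength_nm)
    return np.array([l - m, s - (l + m) / 2])

# A 1-nm step in wavelength leaves a clear trace in the differenced
# signals, even though each cone's own response barely changes.
d = np.linalg.norm(opponent_signals(571.0) - opponent_signals(570.0))
print(d > 0.001)   # prints True
```

The point of the sketch is that three broad, overlapping channels, followed by finely-tuned differencing, suffice to separate wavelength steps far narrower than any single channel's tuning, which is the color-vision analogue of hyperacuity.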
Hyperacuity processing has also been identified in hearing, where many more pitches can be discriminated than there are hair-cell stations along the basilar membrane in the cochlea (Altes, 1989), and in touch, where the distinction between resolution (one Braille marker or two?) in terms of millimeters can be counterpoised to high-precision judgment of surface properties and distances in terms of micrometers (Loomis, 1979).
A concept related to hyperacuity but separate from it is so-called superresolution. One of its implementations is based on a prior statistical association between target minutiae that have not been transmitted to the image and others that have. By a kind of Bayesian inference, a decision about absent target components is made from features that are contained in the image together with the associated statistics.
- Altes RA 1989. Ubiquity of hyperacuity. J. Acoust. Soc. Am., 85, 943-952.
- Kaufman PL, Alm A 2003. Adler’s Physiology of the Eye, 10th ed. St. Louis: Mosby. Chapter 17, pp. 453-569.
- Levi DM, Klein SA, Aitsebaomo AP 1985. Vernier acuity, crowding and cortical magnification. Vision Research, 25, 963-977.
- Loomis JM 1979. An investigation of tactile hyperacuity. Sensory Processes, 3, 289-302.
- Snippe HP, Koenderink JJ 1992. Discrimination thresholds for channel-coded systems. Biological Cybernetics, 66, 543-551.
- Westheimer G 1975. Visual acuity and hyperacuity. Invest. Ophthalmol., 14, 570-572.
- Westheimer G 2008. Hyperacuity. In Squire LA (ed.), Encyclopedia of Neuroscience. Academic Press, Oxford.