Visual illusions: An Empirical Explanation

From Scholarpedia
Dale Purves et al. (2008), Scholarpedia, 3(6):3706. doi:10.4249/scholarpedia.3706, revision #89112

Curator: Dale Purves

The evolution of biological systems that generate behaviorally useful visual percepts has inevitably been guided by many demands. Among these are: (1) the limited resolution of photoreceptor mosaics (thus the input signal is inherently noisy); (2) the limited number of neurons available at higher processing levels (thus the information in retinal images must be abstracted in some way); and (3) the demands of metabolic efficiency (thus both wiring and signaling strategies are sharply constrained). The overarching obstacle in the evolution of vision, however, was documented several centuries ago when George Berkeley pointed out that information in retinal images cannot be mapped unambiguously back onto its real-world sources (Berkeley, 1709/1975). In contemporary terms, information about the size, distance and orientation of objects in space is inevitably conflated in the retinal image (Fig. 1; the same conflation obtains for illumination, reflectance and transmittance; see Purves & Lotto, 2003). In consequence, the patterns of light in retinal stimuli cannot be related to their generative sources in the world by any logical operation on images as such. Nonetheless, to be successful, visually guided behavior must deal appropriately with the physical sources of light stimuli, a quandary referred to as the “inverse optics problem.” As briefly explained here, visual illusions appear to arise primarily from the way the visual system contends with this problem. An implication of this approach is that the distinction between appearance and reality as discussed by philosophers like Plato (360 BC) and Kant (1787) finds new support.
Figure 1: The necessarily uncertain relationship between the information in the images that fall on the retina and their real-world sources. The same retinal projection can be generated by objects of different sizes at different distances from the observer, and in different orientations.
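The geometrical ambiguity illustrated in Figure 1 can be made concrete with a few lines of arithmetic: the visual angle subtended by an object depends only on the ratio of its size to its distance, so indefinitely many size-distance pairs produce exactly the same retinal projection. The sketch below uses illustrative values (not drawn from the original analysis) to make the point:

```python
import math

def visual_angle(size, distance):
    """Visual angle (in degrees) subtended by an object of a given
    physical size at a given distance from the observer."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# Objects of different sizes at proportionally different distances
# (hypothetical values, in meters) project identically on the retina.
pairs = [(0.5, 10), (1.0, 20), (2.0, 40)]
angles = [visual_angle(s, d) for s, d in pairs]
# All three visual angles are identical, so the retinal image alone
# cannot reveal which object produced it -- the inverse optics problem.
```

Because only the size/distance ratio enters the computation, the image underdetermines its source no matter how precisely it is measured.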


Definition of terms

The first term that needs to be defined in discussing visual illusions is 'percept.' The simplest definition of a visual percept is "visual experience," a definition that works well in ordinary discourse, in vision science, and in thinking about visual illusions. Of course, many visual stimuli elicit appropriate behavioral responses even when we are not aware of having seen them; indeed, the majority of visual processing and visually guided behavior falls into this category (think of all the visual information processed and responded to unconsciously to keep your car on the road when driving to work while preoccupied with other thoughts). Perception is thus used here in the conventional sense of our visual experience or phenomenology.

The second term that must be defined is 'illusion.' The usual concept of an illusion is a percept that fails to agree with measurements of the real world made with devices such as photometers, spectrophotometers, rulers, protractors, and so on. Although this definition is a good start, the evidence discussed here leads to the counterintuitive conclusion that all visual percepts are illusory in this sense, and that the textbook illusions that people have made so much of are only the more obvious examples of the normal discrepancy between stimuli and percepts (the more flagrant discrepancies arising for the reasons explained below).

Approaches to understanding visual illusions

Most modern investigators have sought to explain vision, and by the same token visual illusions, in terms of the response properties of neurons in the primary and higher order visual cortices. The general idea is that the responses of neurons encode the biologically useful features of light stimuli that fall on the retina, and ultimately generate percepts that correspond to the physical properties of objects in the real world. In this conception, illusions arise because neurobiological constraints do not always allow this goal to be met.

A second approach to understanding visual percepts is predicated on the need to respond successfully to the statistical characteristics of natural scenes. Many aspects of the neuronal responses to visual stimuli are attuned to aspects of natural images that occur with high regularity, and are thus most likely to be useful guides to behavior (e.g., higher spatial frequencies [edges], orientations in the cardinal axes, middle wavelengths, slower speeds, and so on) (Atick and Redlich, 1992; Field, 1994). Consistent with this idea, the receptive fields of some neurons in the primary visual cortex (V1) look very much like the filters used to produce the relevant basis functions of images, and the organization of these fields can be predicted from images of natural scenes (Atick and Redlich, 1992; Field, 1994; Dong and Atick, 1995; Bell and Sejnowski, 1997; Simoncelli and Olshausen, 2001). Again, illusions would arise because of imperfect neuronal operation in this filtering framework.

A third approach has focused on the limitations inherent in processing complex information. In this view, noise in visual processing arises both from the complexity of natural images and from the neuronal mechanisms by which image information is processed. This approach often uses Bayesian decision theory as both a tool and a conceptual framework to address visual experience and illusion (Knill and Richards, 1996; Rao, Olshausen and Lewicki, 2002; Weiss, Simoncelli and Adelson, 2002; Stocker and Simoncelli, 2006; Doya, Ishii, Pouget and Rao, 2007; Chater, Tenenbaum and Yuille, 2007).
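As a concrete illustration of this third approach, the sketch below combines a prior favoring slow speeds with a noisy speed measurement, in the general spirit of Weiss, Simoncelli and Adelson (2002). The distributions and numbers here are hypothetical, chosen only to show how greater measurement noise (as with a low-contrast stimulus) biases the estimate toward the prior:

```python
import numpy as np

# A minimal Bayesian speed-estimation sketch; all values illustrative.
speeds = np.linspace(0, 10, 1001)     # candidate speeds (deg/s)
prior = np.exp(-speeds / 2.0)         # prior favoring slow speeds
prior /= prior.sum()

def posterior_mean(measured, noise_sd):
    """Combine a noisy speed measurement with the slow-speed prior
    and return the mean of the resulting posterior distribution."""
    likelihood = np.exp(-0.5 * ((speeds - measured) / noise_sd) ** 2)
    post = likelihood * prior
    post /= post.sum()
    return float((speeds * post).sum())

# Higher measurement noise pulls the estimate further toward slow
# speeds, mirroring the perceptual bias this framework predicts.
low_noise = posterior_mean(5.0, 0.5)
high_noise = posterior_mean(5.0, 2.0)
# high_noise < low_noise < 5.0
```

The perceived slowing under uncertainty emerges directly from the interaction of likelihood width and prior shape, with no additional mechanism required.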

Yet another approach is predicated squarely on the idea that the inverse optics problem is the central challenge in vision, seeing this obstacle as the major force that has determined the nature of visual percepts (and thus textbook illusions) over the course of evolution (Purves and Lotto, 2003; Howe and Purves, 2005a; Howe, Lotto, and Purves, 2006). In this conception, percepts are determined empirically according to the image-source relationships that humans have been exposed to over accumulated experience. Precedents for this latter approach are evident in Helmholtz’s concept of unconscious inference (Helmholtz, 1924), the "organizational principles" of Gestalt psychologists, and in the empirical explanation of some illusions proposed by modern psychologists such as Richard Gregory and others who have interpreted illusions in terms of what abstract visual stimuli represent in natural scenes (Gregory, 1966/1967).

Explaining illusions in empirical terms

Although each of these four approaches has sometimes been used to rationalize particular visual illusions, only the empirical approach to visual perception has sought to explain the full spectrum of these phenomena in a single theoretical and experimental framework. The hypothesis is that all visual percepts are generated empirically to facilitate successful behavior, and were never intended to correspond to the physical properties of the world or our measurements of these properties. From this perspective, illusions do not reflect any inadequacy or imperfection of visual function, but are rather signatures of its core strategy. The experimental approach to validating this hypothesis is to use natural image databases as proxies for accumulated human experience with some aspect of the visual world. If this theory is correct, then the visual percepts elicited by any given stimulus should be predictable on the basis of such data, classical illusions being predicted as well as perceptions that have not been so categorized because they are less markedly discrepant from some physical measurement.

The following example of the perception of a line indicates how this wholly empirical approach has been applied, the predictions it makes, and the challenging perceptual observations it can make sense of. As described in introductory psychology texts, the way we see the length of lines (or spatial intervals generally) is peculiar in that lines of the same physical length appear to differ in length depending on how they are presented (Fig. 2).
Figure 2: The perception of the length of the same line varies as a function of the way the line is presented. (A) The Mueller-Lyer illusion (the line decorated with arrow tails looks longer than the line decorated with arrowheads). (B) The Ponzo illusion (the upper line looks longer than the lower line). (C) The inverted-T illusion (the vertical line looks longer than the horizontal one).
By the definition given earlier, such percepts are conventional examples of visual illusions. The simplest instance of this phenomenology is that vertical lines look longer than horizontal lines (see Fig. 2C). As demonstrated repeatedly over the years, this effect is actually a good deal more subtle in that the length seen varies systematically with the 2-D orientation in the retinal image (Pollock & Chapanis, 1952; Cormack & Cormack, 1974; Craven, 1993). Thus, the same physical line appears shorter or longer, with a horizontal line having the minimum apparent length and, oddly enough, a line projected on the retina that is about 30° off vertical having the maximum apparent length (Fig. 3).
Figure 3: Different perceived line lengths as a function of orientation. (A) The horizontal line in this figure looks shorter than the vertical or oblique lines, despite the fact that all the lines are identical in physical length. The horizontal/vertical comparison here is similar to the effect elicited by the T-illusion in Figure 2C. (B) Quantitative assessment of the apparent length of a line reported by subjects as a function of its orientation in the retinal image (orientation is expressed as the angle between the line and the horizontal axis). The maximum length seen by observers occurs when the line is oriented approximately 30° from vertical, at which point it appears about 10-15% longer than the minimum length seen when the orientation of the stimulus is horizontal. The data shown are an average of the psychophysical results reported in the literature (Pollock & Chapanis, 1952; Cormack & Cormack, 1974; Craven, 1993). ((B) after Howe and Purves, 2002)
In explaining these observations with an empirical framework, the first step is to determine the relative frequency of different straight-line intervals that objects project on the retina at different orientations (Fig. 4A, B). The proxy for past human experience with the relevant geometrical relationships is a database of 3-D information about the physical world obtained with a laser range scanner (Howe and Purves, 2002), as indicated in Figure 4. The idea would thus be to explain the observations in Figure 3 on the basis of the statistical relationship between line lengths projected on the retina and the possible sources of those lines in the world. The prediction, in empirical terms, would be that the perceived length elicited by any given line in the image should accord with the empirical rank of the projected line in accumulated human experience.
Figure 4: Sampling straight lines in a range image database. (A) The pixels in a region of one of the images in the database are represented diagrammatically by grid squares; the connected black dots indicate a series of templates for sampling straight lines at different projected orientations. (B) Examples of straight-line templates overlaid on a typical image. White templates indicate projected lines in the image that could have been generated by underlying regions in 3-D space, and were thus accepted as valid samples of geometrical straight lines. Black lines indicate sets of projected lines in the image that could not have been generated by the underlying regions in 3-D space, and were therefore rejected. Templates of various lengths were systematically applied to different regions in the range image database, yielding about 120 million valid samples of the physical sources of straight-line projections.(From Howe and Purves, 2005a)
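The acceptance test described in the caption can be sketched as follows: a template counts as a valid sample only if the 3-D range points beneath its pixels lie on a (nearly) straight physical line. The line-fitting method, tolerance, and data below are illustrative assumptions, not the procedure actually used by Howe and Purves:

```python
import numpy as np

def is_valid_line_sample(points_3d, tol=0.01):
    """Accept a template if the 3-D points under its pixels lie
    (nearly) on a straight line: residuals from a best-fit line,
    found via SVD, must fall below a tolerance (hypothetical units)."""
    pts = np.asarray(points_3d, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Singular values beyond the first measure deviation from a line.
    s = np.linalg.svd(centered, compute_uv=False)
    residual = np.sqrt((s[1:] ** 2).sum() / len(pts))
    return bool(residual < tol)

# A physically straight edge (collinear 3-D points) is accepted...
straight = [[0, 0, 5], [1, 0.5, 5.5], [2, 1, 6], [3, 1.5, 6.5]]
# ...while pixels that only *project* to a line but span a depth
# discontinuity between two surfaces are rejected.
broken = [[0, 0, 5], [1, 0.5, 5.5], [2, 1, 9], [3, 1.5, 9.5]]
```

The second case shows why the validation step matters: a straight line in the image does not guarantee a straight line in the world.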
The result of this image-source analysis is shown in the probability distributions in Figure 5A (Howe and Purves, 2005a; Howe and Purves, 2002). It is apparent that these distributions vary systematically for lines projected in different orientations. It follows that the cumulative probability distributions of the sources of lines computed from the probability distributions in Figure 5A will also differ, as shown in Figure 5B. Note that these distributions represent the sum of human experience with differently oriented lines projected on the retina. These values are the summed probabilities of occurrence of all the linear projections that have the same orientation and are equal in length or shorter than a given projected line on the image plane. Thus, each cumulative probability distribution provides an empirical scale on which a particular projected line length in a given orientation can be ranked.
Figure 5: Probability distributions of the physical sources of projected straight lines of different lengths generated from natural objects. (A) The probabilities of occurrence of the physical sources plotted as functions of the projected length of line stimuli for four different orientations. (B) Cumulative probability distributions derived from the distributions in (A). (From Howe and Purves, 2005a)

Consider, for instance, a projected line 7 pixels in length oriented at 20° (this length corresponds to ~1° of visual angle, a length often used in psychophysical studies). The cumulative distribution of the sources of lines oriented at 20° gives a cumulative probability value of 0.1494 for a line of this length. Thus, 14.94% of the physical sources of lines oriented at 20° generated projections equal to or less than 7 pixels in length, and 85.06% generated longer lines. Accordingly, the empirical rank of a line of this projected length oriented at 20° is 14.94%. The empirical ranks of lines 7 pixels in length at different orientations ranging from 0° to 180° can be similarly determined from the relevant cumulative probability distributions.
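The ranking computation for one orientation can be sketched with a toy distribution standing in for the measured source statistics (cf. Fig. 5); the distribution below is synthetic, so the resulting rank is illustrative only, not the 0.1494 value reported for the range-image data:

```python
import numpy as np

# A toy version of the empirical ranking step. The exponential pdf
# below is a stand-in for the measured probability of source lengths
# at one projected orientation (cf. Fig. 5A), not real data.
lengths = np.arange(1, 51)          # projected length (pixels)
prob = np.exp(-lengths / 10.0)      # hypothetical pdf
prob /= prob.sum()

cdf = np.cumsum(prob)               # cumulative distribution (cf. Fig. 5B)

def empirical_rank(projected_length):
    """Fraction of source occurrences whose projections are equal to
    or shorter than the given length, i.e., its empirical rank."""
    return float(cdf[projected_length - 1])

# The rank of a 7-pixel line on this toy scale:
rank = empirical_rank(7)            # a value between 0 and 1
```

Repeating this lookup with the cumulative distribution for each orientation yields a rank-versus-orientation function of the kind compared with psychophysical data in Figure 6.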

In Figure 6 these data are compiled to show how these rankings vary as a function of line orientation for a projected line that subtends a particular distance on the retina. This systematic variation predicts how the perceived length of a line is expected to change as a function of orientation. Comparison of the function in Figure 6 with the psychophysical function in Figure 3B shows how well these predictions can explain this otherwise puzzling aspect of what we see.
Figure 6: The predicted psychophysical function (cf. Fig. 3B). The percentile rankings determined for lines 7 pixels in length over the full range of orientations, derived from the cumulative probabilities of the physical sources of linear projections in different orientations (cf. Fig. 5B). This function predicts the perception of the same line length presented in different orientations, and should be compared with the psychophysical reports of perceived line length in Figure 3B. (After Howe and Purves, 2005a)

The other line length effects illustrated in Figure 2 can also be explained in this way (Howe and Purves, 2005a). By the same token, equally odd and subtle perceptual phenomena in brightness (Yang and Purves, 2004), color (Long, Yang and Purves, 2006) and motion (Wojtach et al., 2008; Sung, Wojtach, and Purves, 2009) are predicted by similar analyses of image databases that, to a first approximation, represent the sum of human experience with luminance, spectral distribution, or image sequences, respectively. For example, in accord with the static geometric stimuli discussed above, well-known motion “illusions” such as the flash-lag effect (Hazelhoff & Wiersma, 1924; MacKay, 1958; Nijhawan, 1994; Whitney & Murakami, 1998; Berry et al., 1999; Eagleman & Sejnowski, 2000; Kerzel & Gegenfurtner, 2003; Jancke et al., 2004a; Jancke et al., 2004b) and the aperture problem (Wallach, 1935; Hildreth, 1984; Nakayama & Silverman, 1988; Anderson & Sinha, 1997), as well as the perception of speed and direction more generally, can be explained in terms of the accumulated influence of images and their relationships to moving objects in the environment (Wojtach et al., 2008; Sung, Wojtach, and Purves, 2009; Wojtach, Sung, and Purves, 2009). Since the registration of movement is essential to successful behavior, the ability to provide an empirical basis for these motion percepts lends further support to this approach to vision.

Implications

This empirical way of rationalizing visual illusions has some general implications that are worth noting. The first is a new way of thinking about how vision and visual systems operate. In this framework, patterns of light falling on the retina generate percepts by activating visual circuitry that has, over the course of evolution, come to be associated with successful behavior in response to that pattern. This link between images, percepts and physical reality, however, cannot have been created by associating features in the retinal image with features of the world: the inverse optics problem precludes any scheme of visual processing that could logically relate the characteristics of a retinal image directly to its generative sources. Perceived qualities such as form, brightness, color, and motion thus have no logical meaning in the physical world, although the physical correlates of these qualities can of course be measured with instruments and studied in physical terms.

Nevertheless, by evolving circuitry that links retinal stimuli to behavior according to operational success, the challenge of the inverse problem can be met, with the degree of success depending on the amount of accumulated species (and individual) experience. In this conception, the percepts of an observer are not, and cannot be, representations of the scene at hand; on the contrary, they are reflexively elicited constructs, determined by eons of accumulated species experience with successful responses to the stimulus in question, and thus not representations at all, at least in the usual sense of the word.

Initially at least, the concept of visual percepts as reflex responses, based on circuitry that encodes empirical success in responding to an image rather than the present relationship between an image and its likely source, seems highly counterintuitive. First, this idea defies the overwhelming sense that we see the world as it really is and respond accordingly; in contrast, the evidence suggests that the perception of any scene is an operational construct based on millions of years of species experience that (because of the inverse problem) bears no direct relation to real-world objects. In an important sense, then, our experience of the world is always located behind a veil of appearances; visual percepts correlate with reality only because of a sufficient accumulation of empirical information. Second, this way of conceiving visual physiology turns the conventional wisdom about the nature of visual neuronal properties on its head. Rather than extracting features from images and passing them on to "higher-order" visual areas where they are combined again as percepts, this alternative framework implies that neurons in the primary and extrastriate visual pathways operate by incorporating accumulated empirical information about image-source relationships.

The key to discovering the neural correlates of visual illusions would be to consider how large populations of neurons could process stimuli empirically. Thus, in the case of the apparent length of a line offered above, it would be incorrect to approach the problem in terms of how features of projected line length are encoded and passed on to higher levels in the system. Rather, cortical activity would need to be understood as encoding the conjoint probability distribution of all the possible sources of the retinal stimulus. The fact that the perceptual variation of line length as a function of orientation (as well as other so-called illusions of brightness, color, geometric form, and motion) is so well predicted by the statistical link between images and their generative sources offers strong support for this kind of approach.

A few neurobiological clues to date suggest that biological visual circuitry does indeed subserve these empirical demands. For example, enhanced responses to contrast boundaries (Hubel and Wiesel, 1962) as well as color-opponency responses (Hubel and Wiesel, 1968) are correlated with the basis functions of efficient statistical representations of natural images (Olshausen and Field, 1996; Wachtler, Lee and Sejnowski, 2001; Lee, Wachtler and Sejnowski, 2002; Caywood, Willmore and Tolhurst, 2004). Moreover, some anatomical characteristics of the primary visual cortex, e.g. preferential horizontal connections between neurons tuned to similar orientations (Bosking et al., 1997), are also consistent with the incorporation of natural image-source statistics in visual processing (Geisler et al., 2001; Goldberg, 1989). Such observations indicate that some physiological links to understanding perception in empirical terms are already known. Presumably visual illusions will prove as great a boon for unraveling the complex functional architecture of the visual system as they have been for understanding the basis of perception generally.

Summary

In summary, if the visual brain has evolved by associating images with behavioral responses appropriate to the nature of the environment, then understanding visual circuits in terms of image properties alone will be insufficient to explain perception. The underlying neural networks will not be describable without reference to the possible generative sources of the projected images experienced by an agent—animal or artificial—as it navigates the environment, and how these images are related to successful behavior.

The challenge of the inverse problem implies that biological visual systems must take advantage of the empirical links between inherently ambiguous images and their possible generative sources in the real world. A rapidly growing body of evidence suggests that this information gradually accumulates in the structure and function of visual system circuitry as a result of the benefits of successful visually guided behavior. In this conception of vision, percepts, including all the classical visual illusions, are reflexive responses elicited when patterns of light on the retina activate circuitry that, over the course of evolution, has come to represent biologically useful constructs. Experimental support for this way of understanding vision is its ability to predict anomalous percepts of brightness, color, form, and motion that have been difficult to explain in any other way. If this concept of vision is correct, then the detailed structure and function of visual system circuitry gleaned over the last half century will need to be rationalized in terms of this empirical framework. A further corollary of this way of conceiving vision, as indicated, is that all visual percepts are equally illusory; what are commonly called illusions are simply instances in which the differences between what one sees and measured reality are especially obvious.


References

Anderson, B.L. and Sinha, P. (1997). Reciprocal interactions between occlusion and motion computations. Proceedings of the National Academy of Sciences, USA, 94:3477-3480.

Atick, J.J. and Redlich, A.N. (1992) What does the retina know about natural scenes? Neural Computation, 4:196-210.

Bell, A. J. and Sejnowski, T.J. (1997). The 'independent components' of natural scenes are edge filters. Vision Research, 37:3327-3338.

Berkeley, G. (1709/1975). Philosophical Works Including Works on Vision. (Ayers, M.R. ed) London: Everyman/ J.M. Dent.

Berry, M.J., Brivanlou, I.H., Jordan, T.A., and Meister, M. (1999). Anticipation of moving stimuli by the retina. Nature, 398:334-338.

Bosking, W.H., Zhang, Y., Schofield, B., and Fitzpatrick, D. (1997). Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex. Journal of Neuroscience, 17:2112-2127.

Caywood, M.S., Willmore, B., and Tolhurst, D.J. (2004). Independent components of color natural scenes resemble V1 neurons in their spatial and color tuning. Journal of Neurophysiology, 91:2859-2873.

Chater, N., Tenenbaum, J.B. and Yuille, A. (2007). Probabilistic models of cognition: Conceptual foundations. Trends in Cognitive Sciences, 10:287-291.

Cormack, E.O. and Cormack, R.H. (1974). Stimulus configuration and line orientation in the horizontal-vertical illusion. Perception & Psychophysics, 16:208-212.

Craven, B.J. (1993). Orientation dependence of human line-length judgments matches statistical structure in real-world scenes. Proceedings of the Royal Society of London, B 253:101-106.

Dong, D.W. and Atick, J.J. (1995). Statistics of natural time-varying images. Network: Computation in Neural Systems, 6:345-358.

Doya, K., Ishii, S., Pouget, A., and Rao, R.P.N. (2007). Bayesian Brain: Probabilistic Approaches to Neural Coding. Cambridge MA: MIT Press.

Eagleman, D.M. and Sejnowski, T.J. (2000). Motion integration and postdiction in visual awareness. Science, 287:2036-2038.

Field, D.J. (1994) What is the goal of sensory coding? Neural Computation, 6:559-601.

Geisler, W.S., Perry, J.S., Super, B.J., and Gallogly, D.P. (2001). Edge co-occurrence in natural images predicts contour grouping performance. Vision Research, 41:711-724.

Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Reading MA: Addison Wesley.

Gregory, R.L. (1966/1967). Eye and Brain. New York: McGraw-Hill.

Hazelhoff, F.F. and Wiersma, H. (1924). Die Wahrnehmungszeit. Zeitschrift für Psychologie, 96:171-188.

Helmholtz, H. v. (1924). Helmholtz's Treatise on Physiological Optics. New York: Optical Society of America.

Hildreth, E.C. (1984). The Measurement of Visual Motion. Cambridge, MA: MIT Press.

Howe, C. Q. and Purves, D. (2002). Range image statistics can explain the anomalous perception of length. Proceedings of the National Academy of Sciences, USA, 99:13184-13188.

Howe, C.Q. and Purves, D. (2005a). Perceiving Geometry: Geometrical Illusions Explained by Natural Scene Statistics. New York: Springer.

Howe, C.Q., Lotto R.B. and Purves, D. (2006). Empirical approaches to understanding visual perception. Journal of Theoretical Biology, 241:866-875.

Hubel, D.H. and Wiesel, T.N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. Journal of Physiology, 160:106-154.

Hubel, D.H. and Wiesel, T.N. (1968). Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology, 195:215-243.

Jancke, D., Chavane, F., Naaman, S., and Grinvald, A. (2004b). Imaging correlates of visual illusion in early visual cortex. Nature, 428:423-426.

Jancke, D., Erlhagen, W., Schöner, G., and Dinse, H.R. (2004a). Shorter latencies for motion trajectories than for flashes in population responses of primary visual cortex. Journal of Physiology, 556:971-982.

Kant, I. (1787/2003). A Critique of Pure Reason. (Kemp Smith, transl.) New York: Macmillan.

Kerzel, D. and Gegenfurtner, K.R. (2003). Neuronal processing delays are compensated in the sensorimotor branch of the visual system. Current Biology, 13:1975-1978

Knill, D.C. and Richards, W., eds. (1996). Perception as Bayesian Inference. Cambridge UK: Cambridge University Press.

Lee, T.W., Wachtler, T., and Sejnowski, T.J. (2002). Color opponency is an efficient representation of spectral properties in natural scenes. Vision Research, 42:2095-2103.

Long, F., Yang, Z. and Purves, D. (2006). Spectral statistics in natural scenes predict hue, saturation and brightness. Proceedings of the National Academy of Sciences, USA, 103: 6013-6018

MacKay, D.M. (1958). Perceptual stability of a stroboscopically lit visual field containing self-luminous objects. Nature, 181:507-508.

Nakayama, K. and Silverman, G.H. (1988). The aperture problem, I: Perception of nonrigidity and motion direction in translating sinusoidal lines. Vision Research, 28:739-746.

Nijhawan, R. (1994). Motion extrapolation in catching. Nature, 370:256-257.

Olshausen, B.A. and Field, D.J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609.

Plato (1986). The Republic. (G.M.A. Grube, transl.) Indianapolis, IN: Hackett.

Pollock, W.T. and Chapanis, A. (1952). The apparent length of a line as a function of its inclination. Quarterly Journal of Experimental Psychology, 4:170-178.

Purves, D. and Lotto, B. (2003). Why We See What We Do: An Empirical Theory of Vision. Sunderland MA: Sinauer.

Rao, R. P. N., Olshausen, B.A., and Lewicki, M.S. (2002). Probabilistic Models of the Brain: Perception and Neural Function. Cambridge, MA: MIT Press.

Simoncelli, E.P. and Olshausen, B.A. (2001). Natural image statistics and neural representation. Annual Review of Neuroscience, 24:1193-1216.

Stocker, A.A. and Simoncelli, E.P. (2006). Noise characteristics and prior expectations in human visual speed perception. Nature Neuroscience, 9:578-585.

Sung, K., Wojtach, W., and Purves, D. (2009). An empirical explanation of aperture effects. Proceedings of the National Academy of Sciences, USA, 106: 298-303.

Wachtler T., Lee T.W., and Sejnowski, T.J. (2001). Chromatic structure of natural scenes. Journal of the Optical Society of America, 18:65-77.

Wallach, H. (1935/1996). On the visually perceived direction of motion: 60 years later (Wuerger, S., Shapley, R., and Rubin, N., transl). Perception, 25:1317-1367.

Weiss, Y., Simoncelli, E.P., and Adelson, E.H. (2002). Motion illusions as optimal percepts. Nature Neuroscience, 5:598-604.

Whitney, D. and Murakami, I. (1998). Latency difference, not spatial extrapolation. Nature Neuroscience, 1:656-657.

Wojtach, W., Sung, K., and Purves, D. (2009). An empirical explanation of the speed-distance effect. PLoS ONE, 4(8):e6771.

Wojtach, W., Sung, K., Truong, S., and Purves, D. (2008). An empirical explanation of the flash-lag effect. Proceedings of the National Academy of Sciences, USA, 105:16338-16343.

Yang, Z. and Purves, D. (2004). The statistical structure of natural light patterns determines perceived light intensity. Proceedings of the National Academy of Sciences, USA, 101: 8745-8750.

Internal references

  • Olaf Sporns (2007) Complexity. Scholarpedia, 2(10):1623.
  • John Dowling (2007) Retina. Scholarpedia, 2(12):3487.

See also

Vision
