Teleofunctionalism

From Scholarpedia
William G. Lycan and Karen Neander (2008), Scholarpedia, 3(7):5358. doi:10.4249/scholarpedia.5358 revision #89103 [link to/cite this article]

Curator: Karen Neander

Teleofunctionalism is the philosophical theory of mind according to which what makes a given type of mental state the type it is, is its distinctive job or function within its subject’s psychobiology.

Functionalism

Functionalism in the philosophy of mind is a theory about the nature of mental states, according to which they are characterized by their "functional roles." In normal human beings these roles would be performed by neurophysiological states of or in the brain. However, according to functionalism, mental states are not to be characterized neurophysiologically; rather, their natures are given in terms of their computational or perhaps other causal relations to perceptual input, other mental states, and behavioral output. For example, a pain is normally caused by damage to the body or other physical disturbance; it in turn causes belief in such damage, the desire that it stop, distraction, and also the typical withdrawal, favoring and distress behavior. Thus, the functionalist maintains that for a creature to be in a mental state of type M is for it to be in an inner state that has the functional role characteristic of M, no more and no less.

(One may hold a functionalist theory of some types of mental state but not of others.)

Functionalism was inspired by the computer model of the mind, and in particular by the idea that mental states are analogous to computational states characterized at the level of software rather than at that of whatever hardware is “realizing” them. Indeed, functionalists hold that mental states are “multiply realizable,” in that states of one and the same type—pain, the belief that spinach is dangerous, the desire for fame—may differ neurophysiologically across species or possibly even among humans; on some liberal accounts, they need not even be organic, but could consist in electronic states of robots. What matters is only that the defining causal role is being played, not what structures or materials play (or would play) it. An implication, considered very plausible, is that creatures with radically different biological systems may host some of the same kinds of mental states as we do. A creature from a planet orbiting Beta Geminorum could hope for world peace and wonder if there is life in neighboring planetary systems, even if its brain were biologically and chemically very unlike ours.

Early or so-called “machine” functionalism had it that human brains may be described at each of two levels. Of course there is the map of human neurophysiology to be provided by biologists. But the job of psychology is to work out the machine program that is being realized by the lower-level neurophysiology and to describe the same brain states in more abstract, computational terms. Behavior could then be explained by reference to stimuli and to intervening mental states such as beliefs and desires, with the mental states type-identified as functional or computational states along the way; a brain state M would be the mental state it is in virtue of its corresponding to such-and-such a computational state as described in the machine program. Such explanations would themselves presuppose nothing about neurophysiology, since the relevant psychological/computational generalizations would hold regardless of what particular biochemistry might happen to be realizing the program.
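The two-level picture, and the multiple-realizability claim it supports, can be illustrated with a toy sketch (invented for illustration, not part of the article; names such as `PROGRAM`, `NeuralRealizer`, and `SiliconRealizer` are hypothetical): one abstract state-transition “program” is realized by two different “hardwares” that nevertheless behave identically.

```python
# Toy illustration of machine functionalism: the same abstract "program"
# (a state-transition table) realized in two different "hardwares".
# Machine functionalism type-identifies mental states with states of the
# program, not with the hardware realizing it.

# Abstract level: (state, input) -> (next state, behavioral output)
PROGRAM = {
    ("calm", "damage"): ("pain", "withdraw"),
    ("pain", "damage"): ("pain", "withdraw"),
    ("pain", "relief"): ("calm", "rest"),
    ("calm", "relief"): ("calm", "rest"),
}

class NeuralRealizer:
    """One 'hardware': internal states stored as strings."""
    def __init__(self):
        self.state = "calm"
    def step(self, stimulus):
        self.state, behavior = PROGRAM[(self.state, stimulus)]
        return behavior

class SiliconRealizer:
    """A different 'hardware': internal states stored as integers."""
    _encode = {"calm": 0, "pain": 1}
    _decode = {0: "calm", 1: "pain"}
    def __init__(self):
        self.state = 0
    def step(self, stimulus):
        new_state, behavior = PROGRAM[(self._decode[self.state], stimulus)]
        self.state = self._encode[new_state]
        return behavior

# Functionally equivalent despite physically different realizations:
stimuli = ["damage", "damage", "relief"]
a, b = NeuralRealizer(), SiliconRealizer()
outputs_a = [a.step(s) for s in stimuli]
outputs_b = [b.step(s) for s in stimuli]
print(outputs_a == outputs_b)  # True: same functional roles, different realizers
```

On the machine-functionalist view, what makes a state of either realizer the “pain” state is only its position in the transition table, which is exactly why the view abstracts away from the realizing substrate.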

From machine functionalism to teleofunctionalism

Teleofunctionalism differs from machine functionalism primarily in the way in which it interprets the relevant notion of function, and in its emphasis on a more biological and multi-level approach to the mind.

In Sober’s (1985) phrase, teleofunctionalism “put[s] the function back into functionalism.” It employs a notion of a “normal” or “proper” function that is said to be teleological because it is a notion of what something is for, what the thing’s job is. Our pain has the normal or proper function of being caused by bodily damage and causing certain other inner states and motor outputs. (That is probably what the production of pain was selected for, if the neural pathways of our ancestors produced pain that played this role and this increased the average fitness of these ancestors, leading to the selection of these pathways.)

This notion of function permits the possibility of malfunction in a fairly strong sense: if an item has the actual causal disposition to do x, it must be able to do x; but if an item has the normal or proper function of doing x, it might not be able to do x. For example, due to serious malfunction, some people do not experience pain in response to bodily damage, or their pain does not cause aversive behavior. This is consistent with their pain pathways having the same “normal” function as those of “normal” subjects.

Machine functionalism conceived of psychological explanation in terms of subsumption under universal generalizations. However, a picture of psychological explanation has emerged which focuses on explaining psychological capacities by means of a “function-analytic” explanatory strategy. Such an explanation describes a system, such as a body or a brain, as a collection of nested components. Each component is identified, and its function is described, and the overall capacities of the system are explained in terms of their cooperative activity. The strategy is recursive, since each component can be treated as a system in its own right, with each of its components being identified, its function described, and so on. Machine functionalism assumed a two-tiered picture of human psychobiology, with the mental level corresponding to the software of the computer and the neurological level corresponding to the hardware. But such a two-tiered picture was overly simple even for computers, and it has in any event been largely abandoned in favor of this multi-level one.
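The recursive character of the function-analytic strategy can be sketched in miniature (a toy example invented for this sketch, not drawn from the article; the `analyze` helper and the component data are hypothetical): a system is represented as nested components, each with a stated function, and the analysis recurses into each component in turn.

```python
# Toy sketch of the recursive "function-analytic" explanatory strategy:
# a system's capacity is explained by the cooperative functions of its
# nested components, each of which can be analyzed the same way.

def analyze(component, depth=0):
    """Recursively list each component's function; nested parts are indented."""
    lines = ["  " * depth + f"{component['name']}: {component['function']}"]
    for sub in component.get("parts", []):
        lines.extend(analyze(sub, depth + 1))
    return lines

circulatory_system = {
    "name": "circulatory system",
    "function": "distribute oxygen and nutrients",
    "parts": [
        {"name": "heart",
         "function": "pump blood",
         "parts": [
             {"name": "left ventricle", "function": "pump blood into the aorta"},
             {"name": "valves", "function": "prevent backflow"},
         ]},
        {"name": "vessels", "function": "carry blood to tissues"},
    ],
}

for line in analyze(circulatory_system):
    print(line)
```

The multi-level picture in the text corresponds to the nesting here: there is no single privileged “software” level, just successive decompositions, each attributing functions to the components at that level.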

The function-analytic strategy is employed in physiology as well as psychology, and on behalf of a more biological approach to the mind it can be argued that the functional analyses of the mind and of the brain are one and the same. The aim of neurophysiology is to study the neural substrate and its relation to the psychological capacities to which it gives rise, and the aim of cognitive science is to study our psychological capacities and their relation to the neural substrate that gives rise to them. While it is controversial just how we should interpret the notion of function that is employed in these function-analytic explanations, it is clear that physiology employs a notion of function that permits the possibility of malfunction (and so is not a mere notion of an actual causal disposition) and is, arguably, the teleological notion of function.

Teleology and mental content

The appeal to teleology helps in the analysis of mental content, reference or aboutness (see intentionality). It is puzzling how a physical state of a brain can be about something, as a belief is about spinach, a desire is for fame, or a visual state represents a cat. It is even more puzzling how the state can be about something that does not exist; a belief can be about Pegasus or the free lunch as easily as it can be about spinach or Spinoza, and you may hallucinate a cat that is not real. So the aboutness relation cannot simply be a physical or otherwise natural relation between the mental state and its object.

A “teleosemantics” is a teleological theory of such mental content. To explain a hallucination of something that does not exist, the theorist can exploit the fact that physical devices can fail to perform their function. Dretske (1986) would say that your hallucinatory visual state represents a cat because its function is to “indicate” cats, i.e., to come on only when a cat is present, but that it can come on in a cat’s absence if, for instance, the visual system malfunctions. Millikan (1984) says that the state represents a cat because the presence of a cat is required for consumer systems (which use the representation) to perform their proper function in the “normal” way, although circumstances might often not be “normal.”

Another thing a theory of aboutness must explain is the normative nature of mental content. Content is normative in that it affords certain evaluations: we evaluate beliefs as true or false, memories as accurate or inaccurate, perceptions as veridical or illusory, motor instructions as correctly or incorrectly executed. The appeal to teleology helps here because function yields norms: a can opener is supposed to open cans, and there is something wrong if it cannot. Similarly, a heart is supposed to pump blood, and there is something wrong if it cannot.

Teleology and consciousness

The appeal to teleology also helps in the analysis of consciousness and the subjective character of conscious experience, in particular sensory qualities such as the painfulness of pain, the taste of an orange or the visual sensation of blue. Machine functionalism is too liberal, in that it implies, of some systems that are not plausibly conscious, that they are conscious (Block, 1978). For example, if the population of China were organized so as to implement a program that is functionally equivalent to our brains when we feel pain, the nation would not experience pain thereby. Machine functionalism is also challenged by spectrum reversal cases; intuitively, the functional roles of red and green visual sensations might be reversed while the sensations remain the same in experiential quality.

Several responses are available to the teleofunctionalist (Lycan, 1987). Since it is “functions all the way down,” if pain or a red sensation is not captured at a highly abstract level of description, we can use a more fine-grained one. One might worry that teleofunctionalism will thus collapse into the mind-brain identity theory, which type-identifies psychological with neurophysiological phenomena. But once we abandon the idea that there must be just two levels—the functional and the neurophysiological—we can type-identify psychological with teleofunctional phenomena at whichever level of functional organization is appropriate.

Further, a teleofunctional theory requires that the relevant states be for the right things. The physiological processes normally involved in seeing red are not for detecting green objects, nor are the ones normally involved in seeing green for detecting red ones. Nor is the population of China for running relays that implement a pain program. (At least, that is not its biological function; it has no biological function.)

Machine functionalism also has a difficulty with pain that has abnormal causes or effects. The teleofunctionalist can respond that pain is to be individuated by the role it is supposed to play, not the role it is disposed to play when the pain pathways malfunction.

Finally, a teleological theory of content might account for the intentional content of conscious states. It is controversial to what extent the nature of consciousness is exhausted by intentionality, but representational theories have been fruitfully developed. Lycan (1987, 1996), for example, proposes that sensory qualities are the intentional objects of sensory representations and that we are conscious of them when these sensory representations are in turn represented in introspection. He then employs this proposal to cast light on several traditional and contemporary philosophical puzzles about consciousness.

Objections to teleofunctionalism

Teleofunctionalism presupposes a notion of function that is in need of explication, and some philosophers reject it as merely metaphorical, or as a throwback to an Aristotelian or theistic metaphysic, or as legitimate only when analyzed in intentional terms. If such an objection were sound, the notion would be unsuited for explaining mental states in scientifically respectable terms. However, others have argued that it is scientifically respectable. The leading idea is that items have such functions in virtue of their etiology: e.g., the heart is said to have the proper function of pumping blood because that is what hearts were selected for by natural selection (Wright, 1973; Neander, 1991). Others maintain that the normality is statistical, though background selection theory explains how these functions can generate teleological explanations (Boorse, 2002). But the matter does not end there: For instance, Davies (2001) argues further that the appeal to etiology will not work because, he claims, evolutionary theory lacks the resources that would be needed to underwrite the norms required for the notions of teleofunction and malfunction.

There are more particular objections relating to teleofunctionalist theories of either intentionality or consciousness, but another objection that concerns both is based on “Swampman”, an imaginary individual who pops into existence through a purely accidental collision of particles. We can imagine that he is a molecule-for-molecule duplicate of a real person, although the similarity will be a sheer coincidence. On at least the etiological interpretation of teleofunctionalism, Swampman has no teleological functions and so no mental states that depend on them. Opinions on the significance of Swampman vary greatly. Some philosophers maintain that he would indeed not have either teleological functions or mental states, at least at first. Others think that he would, in which case teleofunctionalism is false or, at the least, the teleofunctionalist cannot adopt any etiological theory of teleology.

References

Block, N. J. (1978) Troubles with Functionalism. In W. Savage (ed.), Perception and Cognition: Minnesota Studies in the Philosophy of Science, Vol. IX. Minneapolis: University of Minnesota Press.

Boorse, C. (2002) A Rebuttal on Functions. In Andre Ariew, Robert Cummins and Mark Perlman (eds.), Functions: New Essays in the Philosophy of Psychology and Biology. Oxford: Oxford University Press, pp. 63-112.

Davies, P. S. (2001) Norms of Nature. Cambridge, MA: Bradford Books / MIT Press.

Dretske, F. (1986) Misrepresentation. In Radu Bogdan (ed.), Belief: Form, Content and Function. New York: Oxford University Press, pp. 17-36.

Lycan, W. (1987) Consciousness. Cambridge, MA: Bradford Books / MIT Press.

Lycan, W. (1996) Consciousness and Experience. Cambridge MA: Bradford Books / MIT Press.

Millikan, R. (1984) Language, Thought and Other Biological Categories. Cambridge, MA: MIT Press.

Neander, K. (1991) Functions as Selected Effects. Philosophy of Science, 58: 168-184.

Sober, E. (1985) Panglossian Functionalism and the Philosophy of Mind. Synthese, 64: 165-193.

Wright, L. (1973) Functions. The Philosophical Review, 82: 139-168.

Internal references

  • Morten Overgaard (2008) Introspection. Scholarpedia, 3(5):4953.

Recommended reading

Cummins, R. (1983) Psychological Explanation. Cambridge, MA: Bradford Books / MIT Press.

Wimsatt, W. (1972) Teleology and the Logical Structure of Function Statements. Studies in History and Philosophy of Science, 3(1): 1-80.
