Integrated information theory

From Scholarpedia
Giulio Tononi (2015), Scholarpedia, 10(1):4164. doi:10.4249/scholarpedia.4164 revision #150725 [link to/cite this article]

Curator: Giulio Tononi

Integrated information theory (IIT) attempts to identify the essential properties of consciousness (axioms) and, from there, infers the properties of physical systems that can account for it (postulates). Based on the postulates, the theory permits one, in principle, to derive, for any particular system of elements in a state, whether it has consciousness, how much, and which particular experience it is having. IIT offers a parsimonious explanation for empirical evidence, makes testable predictions, and permits inferences and extrapolations.


Introduction: From phenomenology to mechanisms

Neuroscience has made great progress in explaining how brain mechanisms perform cognitive functions such as perceptual categorization, attention allocation, decision making, motor control, memory acquisition, language parsing, and so on. However, there seems to be an explanatory gap (Levine 1983) or “hard” problem (Chalmers 1996) if one tries to explain, even in principle, why a particular set of neural elements in a state (say, some neurons within my brain firing and some not) should give rise to experience, that is, “feel like something.”[1] Integrated information theory acknowledges that one cannot infer the existence of consciousness starting from physical systems (“from matter, never mind”). Instead, IIT takes the opposite approach: it starts from experience itself, by identifying its essential properties (axioms), and then infers what kind of properties physical systems must have to account for them (postulates). Then IIT employs the postulates to derive, for any particular system of elements in a state, whether it has consciousness, how much, and of which kind. From these premises, IIT offers a parsimonious explanation for empirical evidence, makes testable predictions, and permits inferences and extrapolations. An exposition of IIT and some of its implications can be found in (Tononi 2008, Tononi 2012, Oizumi, Albantakis et al. 2014, Tononi and Koch 2014). A discussion of the neurobiological evidence for IIT is found in (Tononi and Koch 2008).

Axioms: Essential properties of experience

The axioms of IIT are meant to capture the essential properties of experience. They were chosen according to the following criteria:

  1. About experience itself;
  2. Evident: they should be immediately given, not requiring derivation or proof;
  3. Essential: they should apply to all my experiences;
  4. Complete: there should be no other essential property characterizing my experiences;
  5. Consistent: it should not be possible to derive a contradiction among them; and
  6. Independent: it should not be possible to derive one axiom from another.

Based on these criteria, the axioms of IIT are intrinsic existence, composition, information, integration, and exclusion (Figure 1).

Figure 1: Axioms of Integrated Information Theory (IIT). See text for explanations. The illustration is a colorized version of Ernst Mach’s “View from the left eye” (Mach 1959).

Intrinsic existence

Consciousness exists: each experience is actual—indeed, that my experience here and now exists (it is real) is the only fact I can be sure of immediately and absolutely. Moreover, my experience exists from its own intrinsic perspective, independent of external observers (it is intrinsically real or actual).

Composition

Consciousness is structured: each experience is composed of multiple phenomenological distinctions, elementary or higher-order. For example, within one experience I may distinguish a book, a blue color, a blue book, the left side, a blue book on the left, and so on.

Information

Consciousness is specific: each experience is the particular way it is—being composed of a specific set of specific phenomenal distinctions—thereby differing from other possible experiences (differentiation). For example, an experience may include phenomenal distinctions specifying a large number of spatial locations, several positive concepts, such as a bedroom (as opposed to no bedroom), a bed (as opposed to no bed), a book (as opposed to no book), a blue color (as opposed to no blue), higher-order “bindings” of first-order distinctions, such as a blue book (as opposed to no blue book), as well as many negative concepts, such as no bird (as opposed to a bird), no bicycle (as opposed to a bicycle), no bush (as opposed to a bush), and so on. Similarly, an experience of pure darkness and silence is the particular way it is—it has the specific quality it has (no bedroom, no bed, no book, no blue, nor any other object, color, sound, thought, and so on). And being that way, it necessarily differs from a large number of alternative experiences I could have had but I am not actually having.

Integration

Consciousness is unified: each experience is irreducible to non-interdependent, disjoint subsets of phenomenal distinctions. Thus, I experience a whole visual scene, not the left side of the visual field independent of the right side (and vice versa). For example, the experience of seeing the word “BECAUSE” written in the middle of a blank page is irreducible to an experience of seeing “BE” on the left plus an experience of seeing “CAUSE” on the right. Similarly, seeing a blue book is irreducible to seeing a book without the color blue, plus the color blue without the book.

Exclusion

Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, neither less (a subset) nor more (a superset), and it flows at the speed it flows, neither faster nor slower. For example, the experience I am having is of seeing a body on a bed in a bedroom, a bookcase with books, one of which is a blue book, but I am not having an experience with less content—say, one lacking the phenomenal distinction blue/not blue, or colored/not colored; or with more content—say, one endowed with the additional phenomenal distinction high/low blood pressure.[2] Moreover, my experience flows at a particular speed—each experience encompassing say a hundred milliseconds or so—but I am not having an experience that encompasses just a few milliseconds or instead minutes or hours.[3]

Postulates: Properties required of the physical substrate of experience

Assuming the above axioms capture the essential properties of every experience, there must be some reason why those properties are the way they are. IIT postulates that, for each essential property of experience, there is a causal property of a physical substrate that accounts for it (Figure 2). [4] Note that these postulates are inferences that go from phenomenology to physics, not the other way around. This is because the existence of one’s consciousness and its other essential properties is certain, whereas the existence and properties of the physical world are conjectures, though very good ones, made from within our own consciousness. [5]

Figure 2: Postulates of IIT. See text for explanations. The postulates of IIT are visualized for a physical system constituted of elements A, B and C. The cause (effect) repertoire graphs show the possible states of the system on the x-axis and the corresponding probabilities (e.g., \(P(ABC^c|ABC^p)\) ) on the y-axis, where the superscripts p, c, f stand for past, current and future states.

For simplicity, in what follows physical systems will be considered to be constituted of elements in a state, for example neurons or logic gates. All that is required is that such elements have two (or more) internal states, inputs that can influence these states in a certain way, and outputs that in turn are influenced by these states.[6] The postulates associated with each axiom of experience are described below:

Intrinsic existence

To account for the intrinsic existence of experience, a system constituted of elements in a state must exist intrinsically (be actual): specifically, in order to exist, it must have cause-effect power, as there is no point in assuming that something exists if nothing can make a difference to it, or if it cannot make a difference to anything.[7] Moreover, to exist from its own intrinsic perspective, independent of external observers, a system of elements in a state must have cause-effect power upon itself, independent of extrinsic factors. Cause-effect power can be established by considering a cause-effect space with an axis for every possible state of the system in the past (causes) and future (effects). Within this space, it is enough to show that an “intervention” that sets the system in some initial state (cause), keeping the state of the elements outside the system fixed (background conditions), can lead with probability different from chance to its present state; conversely, setting the system to its present state leads with probability above chance to some other state (effect).
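The intervention-based notion of cause-effect power can be illustrated with a small simulation. The sketch below assumes a toy system of three logic gates, each reading the other two (A = OR, B = AND, C = XOR), in the spirit of the example analyzed in Figure 3; it builds the transition table and shows that setting (intervening on) a state leads to a next state with probability far from chance:

```python
from itertools import product

# Assumed toy system: A = OR(B, C), B = AND(A, C), C = XOR(A, B).
def step(a, b, c):
    """One deterministic update of the three-element system."""
    return (int(b or c), int(a and c), int(a != b))

states = list(product([0, 1], repeat=3))

# Transition table: because the system is deterministic, intervening to set
# any initial state leads to a single next state with probability 1 -- far
# above the chance level of 1/8 -- so the system has cause-effect power
# upon itself.
tpm = {s: step(*s) for s in states}

chance = 1 / len(states)          # 0.125
effect = tpm[(1, 0, 0)]           # intervention: set the state to (1, 0, 0)
print(effect, chance)
```

For a noisy system the same comparison would be made between conditional state distributions and the uniform (chance) distribution, rather than between point predictions.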


Composition

The system must be structured: subsets of the elements constituting the system, composed in various combinations, also have cause-effect power within the system. Thus, if a system \(\mathbf{ABC}\) is constituted of elements \(\mathbf{A}\), \(\mathbf{B}\), and \(\mathbf{C}\), any subset of elements (its power set), including \(\mathbf{A}\), \(\mathbf{B}\), \(\mathbf{C}\); \(\mathbf{AB}\), \(\mathbf{AC}\), \(\mathbf{BC}\); as well as the entire system, \(\mathbf{ABC}\), can compose a mechanism having cause-effect power. Composition allows for elementary (first-order) elements to form distinct higher-order mechanisms, and for multiple mechanisms to form a structure.
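The composition postulate can be made concrete by enumerating the candidate mechanisms of a system, that is, the power set of its elements (a minimal sketch using the labels above):

```python
from itertools import chain, combinations

# Candidate mechanisms of system {A, B, C}: every non-empty subset of its
# elements may compose a mechanism with its own cause-effect power.
elements = ('A', 'B', 'C')

def powerset(xs):
    """All non-empty subsets, from first-order elements up to the whole."""
    return chain.from_iterable(
        combinations(xs, r) for r in range(1, len(xs) + 1))

mechanisms = list(powerset(elements))
print(mechanisms)
# 7 candidate mechanisms: A, B, C; AB, AC, BC; ABC
```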


Information

The system must specify a cause-effect structure that is the particular way it is: a specific set of specific cause-effect repertoires—thereby differing from other possible ones (differentiation). A cause-effect repertoire characterizes in full the cause-effect power of a mechanism within a system by making explicit all its cause-effect properties. It can be determined by perturbing the system in all possible ways to assess how a mechanism in its present state makes a difference to the probability of the past and future states of the system. Together, the cause-effect repertoires specified by each composition of elements within a system specify a cause-effect structure. Consider for example, within the system \(\mathbf{ABC}\) in Figure 3, the mechanism implemented by element \(\mathbf{C}\), an XOR gate with two inputs (\(\mathbf{A}\) and \(\mathbf{B}\)) and two outputs (the OR gate \(\mathbf{A}\) and the AND gate \(\mathbf{B}\)). If \(\mathbf{C}\) is OFF, its cause repertoire specifies that, at the previous time step, \(\mathbf{A}\) and \(\mathbf{B}\) must have been either in the state OFF,OFF or in the state ON,ON, rather than in the other two possible states (OFF,ON; ON,OFF); and its effect repertoire specifies that at the next time step \(\mathbf{B}\) will have to be OFF, rather than ON. Its cause-effect repertoire is specific: it would be different if the state of \(\mathbf{C}\) were different (ON), or if \(\mathbf{C}\) were a different mechanism (say, an AND gate). Similar considerations apply to every other mechanism of the system, implemented by different compositions of elements. Thus, the cause-effect repertoire specifies the full cause-effect power of a mechanism in a particular state, and the cause-effect structure specifies the full cause-effect power of all the mechanisms composed by a system of elements.[8]
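The cause repertoire of the XOR example can be computed directly. A minimal sketch, assuming a uniform (maximum-entropy) prior over past states, as in Oizumi, Albantakis et al. (2014):

```python
from itertools import product

# Update rule for element C: XOR of its inputs A and B.
def next_c(a, b):
    return int(a != b)

# Cause repertoire of the mechanism "C is OFF" over the purview {A, B}:
# with a uniform prior, probability mass is spread evenly over the past
# states compatible with the current state of the mechanism.
past_states = list(product([0, 1], repeat=2))          # states of (A, B)
compatible = [s for s in past_states if next_c(*s) == 0]

cause_repertoire = {s: (1 / len(compatible) if s in compatible else 0.0)
                    for s in past_states}
print(cause_repertoire)
# (OFF,OFF) and (ON,ON) each get probability 0.5; the other two states get 0,
# matching the description in the text.
```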


Integration

The cause-effect structure specified by the system must be unified: it must be intrinsically irreducible to that specified by non-interdependent sub-systems obtained by unidirectional partitions. Partitions are taken unidirectionally to ensure that cause-effect power is intrinsically irreducible - from the system’s intrinsic perspective - which implies that every part of the system must be able to both affect and be affected by the rest of the system. Intrinsic irreducibility can be measured as integrated information (“big phi” or \(\Phi\), a non-negative number), which quantifies to what extent the cause-effect structure specified by a system’s elements changes if the system is partitioned (cut or reduced) along its minimum partition (the one that makes the least difference). By contrast, if a partition of the system makes no difference to its cause-effect structure, then the whole is reducible to those parts.[9] If a whole has no cause-effect power above and beyond its parts, then there is no point in assuming that the whole exists in and of itself: thus, having irreducible cause-effect power is a further prerequisite for existence. This postulate also applies to individual mechanisms: a subset of elements can contribute a specific aspect of experience only if their combined cause-effect repertoire is irreducible by a minimum partition of the mechanism (“small phi” or \(\varphi\))[10].
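The effect of a unidirectional partition can be sketched for the same assumed three-gate system (A = OR, B = AND, C = XOR). The actual \(\Phi\) computation uses the earth mover's distance over the whole cause-effect structure and minimizes over all partitions; the toy version below checks a single cut (C's outputs to A and B replaced by noise) from a single state, using total-variation distance:

```python
# Assumed example system: A = OR(B, C), B = AND(A, C), C = XOR(A, B).
def intact_next(a, b, c):
    return (int(b or c), int(a and c), int(a != b))

def effect_dist_intact(state):
    """Effect distribution of the intact (deterministic) system."""
    return {intact_next(*state): 1.0}

def effect_dist_cut(state):
    """Unidirectional partition: A and B no longer see C; the value C
    feeds them is replaced by an independent fair coin."""
    a, b, c = state
    dist = {}
    for noise in (0, 1):
        nxt = (int(b or noise), int(a and noise), int(a != b))
        dist[nxt] = dist.get(nxt, 0.0) + 0.5
    return dist

whole = effect_dist_intact((1, 0, 0))
cut = effect_dist_cut((1, 0, 0))
support = set(whole) | set(cut)
tv = 0.5 * sum(abs(whole.get(s, 0.0) - cut.get(s, 0.0)) for s in support)
print(tv)  # a positive distance: this cut makes a difference, so the
           # system is not reducible along this partition
```

A full implementation would repeat this for every mechanism and every partition and keep the minimum, as done by tools such as the PyPhi package.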


Exclusion

The cause-effect structure specified by the system must be definite: it is specified over a single set of elements—neither less nor more—the one over which it is maximally irreducible from its intrinsic perspective (\(\Phi^{\textrm{max}}\)), thus laying maximal claim to intrinsic existence. For example, within \(\mathbf{ABCDE}\) in Figure 3, many candidate systems could specify cause-effect structures, including \(\mathbf{AB}\), \(\mathbf{AC}\), \(\mathbf{BC}\), \(\mathbf{ABC}\), \(\mathbf{ABCD}\), \(\mathbf{ABCDE}\), and so on. Among these, the system that specifies the cause-effect structure that is maximally irreducible from its own intrinsic perspective is the set of elements \(\mathbf{ABC}\), rather than any of its subsets or supersets. With respect to causation, this has the consequence that the “winning” cause-effect structure excludes alternative cause-effect structures specified over overlapping elements, otherwise there would be causal overdetermination: if a mechanism in a state (say \(\mathbf{A}\) OFF) specifies a particular cause-effect repertoire within one system (\(\mathbf{ABC}\)), it should not additionally specify an overlapping cause-effect repertoire as part of other, overlapping systems (say \(\mathbf{AB}\) or \(\mathbf{ABCD}\)), otherwise one would be counting multiple times the difference that a mechanism makes. The exclusion postulate can be said to enforce Occam’s razor (entities should not be multiplied beyond necessity): it is more parsimonious to postulate the existence of a single cause-effect structure over a system of elements—the one that is maximally irreducible from the system’s intrinsic perspective—than a multitude of overlapping cause-effect structures whose existence would make no further difference. 
The exclusion postulate also applies to individual mechanisms: a subset of elements in a state specifies the maximally irreducible cause-effect repertoire (MICE) within the system (\(\varphi^{\textrm{max}}\)), called a core concept, or concept for short. Again, it cannot additionally specify a cause-effect repertoire overlapping over the same elements, because otherwise the difference a mechanism makes would be counted multiple times. A maximally irreducible cause-effect structure composed of concepts is called a maximally irreducible conceptual structure (MICS), or conceptual structure for short.[11] The system of elements that specifies a conceptual structure is called a complex. It is useful to think of a conceptual structure as existing as a form in cause-effect space, whose axes are given by all possible past and future states of the complex. In this space, every concept is a point (star), whose size is given by its irreducibility \(\varphi^{\textrm{max}}\), and a conceptual structure is a “constellation” of points, that is, a form.[12]
Finally, the exclusion postulate also applies to spatio-temporal grains, implying that a conceptual structure is specified over a definite grain size in space (quarks, atoms, neurons, neuronal groups, brain areas, and so on) and time (microseconds, milliseconds, seconds, minutes, and so on), the one at which \(\Phi\) reaches a maximum. This means that, if cause-effect power at a coarser grain is more irreducible than at a finer grain, then, from the intrinsic perspective of the system, the coarser grain of causation excludes the finer one (Hoel, Albantakis et al. 2013). Once more, this implies that a mechanism cannot specify a cause-effect repertoire at a particular temporal grain, and additional effects at a finer or coarser grain, otherwise the differences a mechanism makes would be counted multiple times.[13][14][15]
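In computational terms, the exclusion step amounts to a search over candidate systems for the one with maximal \(\Phi\). In the sketch below the \(\Phi\) values are illustrative placeholders chosen to mirror the example in the text (ABC wins over its subsets and supersets), not computed values:

```python
# Illustrative-only Phi values for candidate systems within ABCDE; in a real
# analysis each value would come from the minimum-partition computation.
candidates = {
    ('A', 'B'): 0.3,
    ('A', 'C'): 0.2,
    ('B', 'C'): 0.4,
    ('A', 'B', 'C'): 1.9,
    ('A', 'B', 'C', 'D'): 0.6,
    ('A', 'B', 'C', 'D', 'E'): 0.4,
}

# The complex is the candidate whose cause-effect structure is maximally
# irreducible; all overlapping candidates are excluded.
complex_, phi_max = max(candidates.items(), key=lambda kv: kv[1])
print(complex_, phi_max)   # ('A', 'B', 'C') wins in this toy example
```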

Identity: an experience is a conceptual structure that is maximally irreducible intrinsically

Together, the axioms and postulates of IIT provide a principled way to determine whether a set of elements in a state specifies a conceptual structure and, if so, to characterize it in every aspect.[16] The central identity proposed by IIT is then as follows: every experience is identical with a conceptual structure that is maximally irreducible intrinsically, also called "quale" sensu lato (Figure 3) (note that the identity is between an experience and the conceptual structure specified by a set of elements in a state, not between an experience and its physical substrate - the elements as such).[17] In other words, an experience is a “form” in cause-effect space. The quality of the experience—the way it feels due to its particular content of phenomenal distinctions—is completely specified by the form of the conceptual structure: the phenomenal distinctions are given by the concepts (qualia sensu stricto) and their relationship in cause-effect space. The quantity of the experience—the level to which it exists—is given by its irreducibility \(\Phi^{\textrm{max}}\).[18][19] The postulated identity between features of experiences and features of conceptual structures implies, for instance, that the breakdown of consciousness in sleep and anesthesia must correspond to a breakdown of conceptual structures; that the presence of distinct modalities and submodalities must correspond to distinct clusters of concepts in cause-effect space; that features that are bound phenomenologically (a blue book) must be bound in the conceptual structure, corresponding to irreducible higher-order concepts; that similar experiences must correspond to similar conceptual structures, and so on (see Section 5: Predictions and explanations). [20]

Figure 3: The central identity of IIT: An experience is a maximally irreducible conceptual structure (a quale sensu lato, Q) composed of concepts specified over itself by a complex constituted of elements in a state - a “form” in cause-effect space. The figure shows a didactic example: the system constituted of logic gates A,B,C,D,E at the bottom, analyzed based on the postulates of IIT (Oizumi, Albantakis et al. 2014), contains a complex ABC. The complex in its present state specifies over itself a conceptual structure—a maximally irreducible cause-effect structure made of concepts (maximally irreducible cause-effect repertoires). On the right, the conceptual structure is presented as the set of concepts specified by a mechanism of the system in its present state over all past and future states of the system. In the middle, it is presented as a 2-D projection in which the cause-effect repertoire of each concept is a “star” in cause-effect space, where each axis is a possible past (in blue) and future (in green) state of the complex, and the position along the axis is the probability of that state. The position of each star in cause-effect space specifies how the corresponding concept changes the “form” of the quale and thus how it contributes to experience as a phenomenal distinction or quale sensu stricto (q). The size of the star (\(\varphi^{\textrm{max}}\)) measures how irreducible the concept is and thus how much it contributes to experience. The overall “form” of the conceptual structure or quale sensu lato (Q) (constellation of stars) is identical (\(\equiv\)) to the quality of the experience, how the experience feels. The intrinsic irreducibility of the entire conceptual structure, \(\Phi^{\textrm{max}}\), measures how much consciousness there is—the quantity of experience. Different forms correspond to different experiences: they feel the way they do—red feeling different from blue or from a headache—because of the distinct shape of their qualia.

Predictions and explanations

The identity proposed by IIT implies that, ultimately, all qualitative features of every experience correspond to geometrical features of conceptual structures specified by a system of elements in a state. Of course, assessing this identity systematically is difficult, mathematically, computationally, and experimentally: mathematically, because of the need to develop tools to properly characterize and classify the “forms” of high-dimensional conceptual structures; computationally, because of the combinatorial complexity of deriving conceptual structures from elements in a state; and experimentally, because of the requirement to establish whether changes in the physical substrate of our own consciousness are related to changes in experience as predicted by IIT. Nevertheless, the proposed identity can already suggest some simple predictions, as well as provide a parsimonious explanation for known facts about the physical substrate of consciousness. Some of these predictions have been tested, though only in an indirect and approximate manner, while others are in principle testable but technically demanding. A few examples follow:

  1. A straightforward experimental prediction of IIT is that the loss and recovery of consciousness should be associated with the breakdown and reemergence of conceptual structures. While it is currently not possible to calculate the conceptual structure specified by a human brain in a particular state, computer simulations show that systems that specify conceptual structures of high \(\Phi^{\textrm{max}}\) must be both effectively interconnected (integration) and have a large repertoire of differentiated states (information) (Balduzzi and Tononi 2008, Oizumi, Albantakis et al. 2014).[21] By contrast, when the effective connectivity among the elements is reduced, disrupting integration, or it becomes homogeneous, disrupting information, \(\Phi^{\textrm{max}}\) is low. This prediction has been addressed using transcranial magnetic stimulation (TMS) in combination with high-density electroencephalography (EEG) in subjects who were alternately awake and conscious, asleep and virtually unconscious (dreamless sleep early in the night), and asleep but conscious (dreaming). The results show that, as predicted, loss and recovery of consciousness are associated with a breakdown and recovery of the brain’s capacity for information integration (i.e. the capacity to specify conceptual structures of high \(\Phi^{\textrm{max}}\)) (Massimini, Ferrarelli et al. 2005, Massimini, Ferrarelli et al. 2010, Casali, Gosseries et al. 2013). Similar results have been obtained with various general anesthetics (Ferrarelli, Massimini et al. 2010, Casali, Gosseries et al. 2013). In these studies, if a subject is conscious when the cerebral cortex is probed with a pulse of current induced by the TMS coil from outside the skull, the cortex responds with a complex pattern of reverberating activations and deactivations that is both widespread (integrated) and differentiated in time and space (information rich). 
By contrast, when consciousness fades, the response of the cortex becomes local (loss of integration) or global but stereotypical (loss of information). Note that, throughout sleep, the cortex remains active at levels not dissimilar from those of wake. IIT also predicts that during generalized seizures associated with loss of consciousness, information integration should be low despite an increased level of activity and synchronization.
  2. IIT also predicts that brain lesions will make a person unconscious if and only if they severely disrupt the capacity for information integration. Moreover, the level of consciousness, as (roughly) assessed in neuropsychological exams, should co-vary with the \(\Phi^{\textrm{max}}\) value of the dominant conceptual structure. Recent TMS-EEG studies in patients with severe brain damage, with or without loss of consciousness (patients who were vegetative, minimally conscious, emerging from minimal consciousness, or conscious but “locked-in”), are consistent with this prediction (Casali, Gosseries et al. 2013).
  3. IIT provides a principled and parsimonious way to account for why certain brain regions appear to be essential for our consciousness while others do not. For example, widespread lesions of the cerebral cortex lead to loss of consciousness, and local lesions or stimulations of various cortical areas and tracts can affect its content (for example, the experience of color). A prominent feature of the cerebral cortex is that it comprises elements that are functionally specialized and at the same time can interact rapidly and effectively (when awake or dreaming). According to IIT, this is the kind of organization that can yield a comparatively high value of \(\Phi^{\textrm{max}}\). On the other hand, lesions of the cerebellum do not affect our consciousness in any obvious way, although the cerebellum is massively interconnected with the cerebral cortex and has four times more neurons. This paradox can be explained by considering that the cerebellum is composed of small modules that process inputs and produce outputs largely independently of one another. As suggested by computer simulations, a system thus organized, even if each module is tightly connected with a complex of high \(\Phi^{\textrm{max}}\) (the cortical complex), will remain excluded from the conceptual structure of the latter and will not form a complex on its own (at best it would decompose into many mini-complexes, each having low \(\Phi^{\textrm{max}}\)). Similar considerations apply to input and output pathways to a cortical complex. Indeed, there is no direct contribution to our consciousness from neural activity within peripheral sensory and motor pathways, or from neural circuits looping out of and back into the cortex through subcortical structures such as the basal ganglia, despite their manifest ability to affect cortical activity and to influence the content of experience. 
IIT predicts that, despite massive interconnections with the cortex, these subcortical structures should remain outside the local maximum of integrated information centered in the cerebral cortex.
  4. It remains to be seen whether the neural substrate of our consciousness (the main or “major” complex) is distributed to most cortical areas, or only to a subset of them, for example chiefly posterior areas, and whether it includes all cortical layers, and perhaps the thalamus, or only certain layers, for example superficial ones, or only particular cell types. It also remains to be established whether this neural substrate is fixed or can vary to some extent. Whatever the answer, IIT predicts that in each case the neural substrate of consciousness should be a local maximum of information integration.
  5. In principle, the major complex can vary (expand, shrink, split, and move), as long as it is a local maximum of information integration. For example, experiences of “pure thought”, which can occur in wakefulness and especially in some dreams, may be specified by a neuronal complex that is smaller than, and substantially different from, the complex specifying purely perceptual experiences.
  6. It is well established that, after the complete section of the corpus callosum—the roughly 200 million fibers that connect the cortices of the two hemispheres—consciousness is split in two: there are two separate “flows” of experience, one associated with the left hemisphere and one with the right one. An intriguing prediction of IIT is that, if the efficacy of the callosal fibers were reduced progressively, there would be a moment at which, for a minor change in the traffic of neural impulses across the callosum, experience would go from being a single one to suddenly splitting into two separate experiencing minds. The splitting of consciousness should be associated with the splitting of a single conceptual structure into two similar ones (when two maxima of integrated information supplant a single maximum). Under certain pathological conditions (for example, dissociative disorders such as hysterical blindness), and perhaps even under certain physiological conditions (say “autopilot” driving while having a phone conversation), such splits may also occur among cortical areas within the same hemisphere in the absence of an anatomical lesion. Again, IIT predicts that in such conditions there should be two local maxima of information integration, one corresponding to a “major” complex and one or more to “minor” complexes (Mudrik, Faivre et al. 2014).
  7. A counterintuitive prediction of IIT is that a system such as the cerebral cortex may be conscious even if it is nearly silent, because it would still be specifying a conceptual structure, though one composed purely of negative concepts. Such a silent state is perhaps approximated through certain meditative practices that aim at reaching “naked awareness” without content (Sullivan 1995). This corollary of IIT contrasts with the common assumption that neurons only contribute to consciousness if they are active in such a way that they “signal” or “broadcast” the information they represent (Baars 1988, Dehaene and Changeux 2011). States of naked awareness should be contrasted with states of unawareness, occurring for example during deep sleep or anesthesia, in which cortical neurons are not inactive but inactivated (due to bistability of their membrane potential or active inhibition), and thus cannot specify a conceptual structure.
  8. Similarly, IIT predicts that a particular brain area can contribute to experience even if it is inactive, but not if it is inactivated. For example, if one were presented with a plate of spinach drained of color, green-selective neurons in the color areas would remain inactive. Thus one would experience, and report, the strange sight of spinach that is gray rather than green. By contrast, if the same area were not just inactive, but inactivated due to a local lesion, the phenomenal distinctions corresponding to colors would be lacking altogether. While presumably one would still report that the spinach is “gray,” in this case “gray” cannot mean the same as when color areas are intact, i.e. not green, not red, and so on. This seems consistent with the behavior of a rare patient with complete achromatopsia and anosognosia due to an extensive lesion of color areas (von Arx, Muri et al. 2010). When presented with green spinach, the patient reports that the spinach is gray, but neither realizes nor concedes that something is wrong with his experience. Although he “knows” that spinach is green, he altogether lacks the phenomenal distinction green/not green.
  9. The elementary concepts that make up an experience should be specified by physical elements having a spatial grain that leads to the conceptual structure having the highest value of \(\Phi\), as opposed to finer or coarser grains (for example, local groups of neurons rather than neurons or brain areas).
  10. The duration of experience should be associated with the time interval at which the relevant physical elements lead to the conceptual structure having the highest value of \(\Phi\), as opposed to finer or coarser grains (for example, hundred milliseconds rather than a millisecond or ten seconds). [22]
  11. The activity states that matter for experience are the differences that make the most difference to the major complex (for example, bursting, high mean firing, low mean firing), irrespective of finer or coarser grainings of states.
  12. The dynamic binding of phenomenological distinctions—say, seeing a red triangle in the middle—occurs if and only if neural mechanisms corresponding to the separate features together specify a cause-effect repertoire over the major complex that is irreducible to their separate cause-effect repertoires. The same applies to temporal binding, say the first and the last note of an arpeggio, as long as they are perceived together as components of the same chord.
  13. The organization of experience into modalities and submodalities (sight, hearing, touch, smell, taste and, within sight, color and shape) should correspond to subsets of concepts clustered together in cause-effect space (modes and submodes) within the same conceptual structure.
  14. The spatial structure that characterizes much of experience, exemplified by two-dimensional visual space, is extremely rich, including a multitude of distinct spatial locations, their relative ordering, their distances, and so on. Therefore, when we experience space, there should be a conceptual structure composed of a multitude of concepts “topographically” organized over its elements (specifying their unique identities, relative ordering, distances, and so on). Intriguingly, a large number of cortical areas are organized like two-dimensional grids, which seem ideally suited to specify conceptual sub-structures with the required features. Moreover, manipulating these grids can alter or abolish the corresponding aspects of experience, including the overall experience of space (see the conscious grid).
  15. The ‘categorical’ structure of other aspects of experiences (e.g. faces, animals, objects) should correspond to a very different organization, with high-level concepts that specify “invariants” (disjunctions of conjunctions) over elements specifying spatial structure. The converging/diverging feedforward-feedback architecture (pyramids) of the connections linking higher order areas with topographically organized areas (grids) seems ideally suited to specify conceptual sub-structures with the required features.
  16. The expansion/refinement of experience that occurs through learning (as when one becomes a connoisseur in some domain) should translate into a corresponding refinement of shapes in cause-effect space, due to the addition/splitting of concepts.
  17. Similarities/dissimilarities between experiences should translate into distances between conceptual structures in cause-effect space.
  18. Unconscious determinants of experience (e.g. the automatic parsing of sound streams into audible words) should be associated with mechanisms that provide inputs to the major complex but remain excluded from it.

Extrapolations: From mechanisms to phenomenology

The identity proposed by IIT must first be validated in situations in which we are confident about whether and how our own consciousness changes, such as the ones listed above. Only then can the theory become a useful framework to make inferences about situations where we are less confident—that is, to extrapolate phenomenology from mechanisms. Such situations include, for example, brain-damaged patients with residual areas of cortical activity, babies, animals with alien behaviors and alien brains, digital computers that can outperform human beings in many cognitive tasks but may be unconscious, and physical systems that may intuitively seem too simple to be associated with experience but may be conscious.[23] For example, IIT implies that consciousness can be graded. While there may well be a practical threshold for \(\Phi^{\textrm{max}}\) below which people do not report feeling much, this does not mean that consciousness has reached its absolute zero. Indeed, according to IIT, circuits as simple as a single photodiode constituted of a sensor and a memory element can have a minimum of experience (Oizumi, Albantakis et al. 2014). Moreover, a simple but large two-dimensional grid of appropriate physical elements could be highly conscious, even if it were doing “nothing” (all binary elements off), and even if it were disconnected from the rest of the world (no external inputs and outputs, the conscious grid). In fact, since according to IIT experience is a maximum of intrinsically irreducible cause-effect power, it exists whenever a physical system is appropriately organized (see also Barrett 2014). 
On the other hand, IIT also implies that aggregates of conscious entities—such as interacting humans—have no consciousness, since by the exclusion postulate only maxima of \(\Phi\) are conscious.[24] Finally, IIT implies that complicated devices may be unconscious—for example purely feed-forward networks in which one layer feeds the next one without any recurrent connections—even though they may perform sophisticated functions, such as finding and recognizing faces in images. By extension, IIT implies that certain systems can be functionally equivalent from the extrinsic perspective while differing radically from the intrinsic perspective – indeed, one may be fully unconscious and the other one fully conscious (Oizumi, Albantakis et al. 2014). For example, it may soon be possible to program a digital computer to behave in a manner identical to that of a human being for all extrinsic intents and purposes. However, from the intrinsic perspective the physical substrate carrying out the simulation in the computer—made of transistors switching on and off at a time scale of picoseconds—would not form a large complex of high \(\Phi^{\textrm{max}}\), but break down into many mini-complexes of low \(\Phi^{\textrm{max}}\) each existing at the time scale of picoseconds. This is because in a digital computer there is no way to group physical transistors to constitute macro-elements with the same cause-effect power as neurons, and to connect them together such that they would specify the same intrinsically irreducible conceptual structure as the relevant neurons in our brain. Hence the brain is conscious and the computer is not – it would have zero \(\Phi\) and be a perfect zombie.[25]
This would hold even for a digital computer that were to simulate in every detail the working of every neuron of a human brain, such that what happens to the virtual neurons (the sequence of firing patterns and ultimately the behaviors they produce) is the same as what happens to the real neurons. On the other hand, a neuromorphic computer made of silicon could in principle be built to realize neuron-like macro-elements that would exist intrinsically and specify conceptual structures similar to ours.


The principles of IIT have implications concerning several issues, some of which are briefly summarized below.

  • Existence and maximally irreducible cause-effect power. Based on the axioms of consciousness, IIT formulates a criterion for existence: existence requires having maximally irreducible cause-effect power; intrinsic existence, independent of an external observer/manipulator, requires having maximally irreducible cause-effect power upon oneself. Given this definition, IIT distinguishes between various kinds of ontological irreducibility, including constitutive irreducibility (macro to micro) and compositional irreducibility (whole to parts).
Constitutive irreducibility occurs if a coarser grain (macro) has more cause-effect power than a finer grain (micro), in space and/or time. Simulated examples in which cause-effect power is quantified rigorously show that macro-elements (for example, a group of neurons) can have more cause-effect power than the aggregate of the constituting microelements (the neurons that make up the group) either in the presence of noise or of degeneracy (see below). Moreover, cause-effect power over an interval of a tenth of a second may be higher than the aggregate cause-effect power over a hundred intervals of one millisecond each, even though seconds are constituted of milliseconds. Constitutive irreducibility can occur if there is indeterminism (causal divergence from the present state to many future states as in noise) and/or degeneracy (causal convergence of many past states onto the present state) (Hoel, Albantakis et al. 2013). Since maximally irreducible cause-effect power is the criterion for existence, one must conclude that if the cause-effect structure specified by macro-elements cannot be reduced to that specified by micro-elements, it must exist as such. [26] That the macro-level can actually do “causal work” rather than being epiphenomenal stands in stark contrast with the reductionist notion that causation happens exclusively at the micro level (Kim 2010). [27]
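The contrast between micro- and macro-level cause-effect power can be made concrete with a toy calculation in the spirit of the simulated examples cited above (Hoel, Albantakis et al. 2013). The sketch below uses effective information — the mutual information between a uniform ("maximum entropy") intervention on the present state and the resulting next-state distribution — as a simple proxy for cause-effect power. The transition matrices are hypothetical: the micro level is made indeterministic and degenerate, so that an appropriate coarse-graining comes out deterministic and carries more cause-effect power.

```python
import numpy as np

def effective_information(tpm):
    """Effective information (bits): mutual information between a uniform
    intervention on the present state and the resulting distribution
    over next states."""
    n = tpm.shape[0]
    avg = tpm.mean(axis=0)  # next-state distribution under the uniform intervention
    kl = lambda p, q: sum(pi * np.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return sum(kl(row, avg) for row in tpm) / n

# Hypothetical micro level: four states {00, 01, 10, 11}; the first three
# map noisily and degenerately among themselves, while 11 maps to itself.
micro = np.array([[1/3, 1/3, 1/3, 0.0],
                  [1/3, 1/3, 1/3, 0.0],
                  [1/3, 1/3, 1/3, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])

# Coarse-graining {00, 01, 10} -> OFF, {11} -> ON yields a deterministic
# macro mechanism over two states.
macro = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit
```

The macro grouping wins because noise and degeneracy at the micro level wash out causal constraints that the macro partition preserves intact.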
Compositional irreducibility (whole to parts) occurs if the cause-effect repertoire and structures specified by the elements constituting a system cannot be partitioned without a loss. Several examples of compositional irreducibility are presented in (Oizumi, Albantakis et al. 2014). At the level of individual mechanisms, if two or more elements together specify causes and effects that cannot be reduced to those of each element separately, there is “binding”. In this case, both the cause-effect repertoires of each individual element and their joint repertoire can coexist. At the level of the system, if the system cannot be partitioned without a change in the cause-effect structure it specifies, there is “structural” irreducibility. Compositional irreducibility stands in contrast to the reductionist notion that causation happens exclusively over first-order elements.
It is useful to distinguish between what actually exists (a maximally irreducible cause-effect structure) and its substrate (the set of constituting elements whose state can be observed and manipulated with the available tools in order to reveal the structure). The smallest elements having extrinsic existence, i.e. specifying a cause-effect repertoire, can be considered as the elementary physical substrate out of which everything that exists must be constituted (“atoms” in the original sense of Democritus). Crucially, IIT emphasizes that the substrate does not exist as such, separately from the cause-effect structure it specifies; rather, it exists as that structure. Also, IIT emphasizes that a cause-effect structure is physical, rather than mathematical, because it has its particular cause-effect properties – its particular nature – rather than merely representing a set of numbers that could take other values.[28]
Finally, IIT distinguishes between cause-effect structures that exist extrinsically (for something else) and those that also exist intrinsically (for themselves). From the extrinsic perspective one can consider any subset of elements (some serving as inputs and some as outputs), at any spatio-temporal level and grain size, as long as one has appropriate tools for performing observations, manipulations, and fine-grainings/partitions over the elements. Doing so will typically show a multitude of cause-effect structures over different subsets of elements and spatio-temporal grains, all of which can be said to exist extrinsically (Hoel, Albantakis et al. 2013). From the intrinsic perspective, however, over a set of elements and spatio-temporal grains only one cause-effect structure exists – the one that is maximally irreducible intrinsically, achieving highest \(\Phi\). This maximum of intrinsic irreducibility of a specific form - a conceptual structure – is what experience is.  Said the other way around, intrinsic existence is consciousness, once it is understood that to exist absolutely, independent of an external observer, requires being a maximum of intrinsically irreducible cause-effect power of a specific form. [29]
  • Being, happening, causing, and the incoherence of ontological reductionism. If the criterion for existence is having maximally irreducible cause-effect power, the reductionist assumption that ultimately only “atoms” exist is incoherent. At issue is not methodological reductionism - the assumption that everything that exists is constituted of minimal irreducible entities – such as elementary particles (ignoring for the present purposes the complexities of modern physics). In fact, IIT’s criterion for existence – having maximally irreducible cause-effect power – is implicitly upheld for elementary particles: if a purported particle cannot be observed and manipulated, even in principle, there would be no reason to think it exists, and if it were not irreducible it would not be elementary. At issue instead is ontological reductionism, which holds that, ultimately, “only” atoms exist: all the higher-level or higher-order properties that a system may display are accounted for by how the atoms are connected and thus are ontologically irrelevant, as they would not bring any new “thing” into existence (Kim 2010).
To illustrate the appeal of ontological reductionism, consider the following scenario. Assume that a detailed model of a physical system such as the brain becomes available, that it is effectively deterministic, and that the background conditions are fixed. For simplicity ignore constitutive reduction and assume as elements neurons rather than atoms, though ultimately for the reductionist everything would reduce to micro-physical elements. Then, knowing the elementary mechanisms (the input/output connections and firing requirements of each neuron) and their present state, a neurophysiologist could predict, neuron by neuron, what the next state of the system will be. This predictability seems to support the reductionist intuitions that: (i) only first-order elements in a state really exist; (ii) the only events that really happen are first-order changes in the state of each neuron; (iii) only first-order effects on first-order elements are really responsible for events happening (Kim 2010).
However, if one abides by the very same criterion of existence – maximally irreducible cause-effect power – it emerges that all three intuitions are incoherent. (i) A reduction to first-order elements in a state entirely misses what actually exists at any given time, including consciousness. As a simple example, consider two systems: LR, which is constituted of four elements interconnected all to all; and L+R, which is constituted of two independent subsystems L and R, each constituted of two interconnected elements, but with no connections between L and R. Assume that the element states of the two systems are identical, say all ON. If we know the mechanisms and the state, we can predict the next state perfectly—say, both systems are in a limit cycle and all elements switch OFF, then ON again. However, assuming that both system LR and subsystems L and R each specify intrinsically irreducible conceptual structures, LR will support one consciousness and system L+R two separate ones. This is a fundamental difference in what exists intrinsically (being) and in the way it exists (essence), but it is not apparent when considering the state of first-order elements. (ii) When the state of the system changes, it is not just the states of the first-order elements that change, but an entire cause-effect structure, composed of multiple higher-order concepts superimposed over the same elements. In the previous example, one consciousness changes state when LR switches to OFF, two when L+R does so. (iii) If multiple superimposed events happen, there must be just as many causes, not only for each elementary event (a neuron turning ON), but also for higher-order events (two or more neurons turning ON). As an example, consider two elementary mechanisms, A and B, both of which are ON, receiving input from 4 sensors, all of which were ON. A fires if it receives an even number of ON inputs, B fires if >2 of its inputs are ON.
Thus, the cause repertoire of A ON specifies “ON inputs even” and that of B ON specifies “ON inputs >2”. Now consider the second-order mechanism A,B, which is ON,ON. Its cause repertoire specifies “ON inputs = 4” and, as can be shown by partitions, it is irreducible to that of A and B taken separately. Hence, there exist three cause repertoires superimposed on the same two elements, each of them specifying a different cause that is not reducible to other causes. [30]
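The A/B example can be verified by brute force. The sketch below is a simplified illustration, not the full IIT calculus: cause repertoires are obtained by uniformly perturbing the four sensors and conditioning, which in this case coincides with the normalized product of the elements' individual repertoires.

```python
from itertools import product

# The four sensors can be perturbed into any of 16 past states.
states = list(product([0, 1], repeat=4))

A = lambda s: int(sum(s) % 2 == 0)  # A fires on an even number of ON inputs
B = lambda s: int(sum(s) > 2)       # B fires if more than 2 inputs are ON

def cause_repertoire(constraint):
    """Uniform distribution over the past states compatible with a constraint."""
    compatible = [s for s in states if constraint(s)]
    return {s: 1 / len(compatible) for s in compatible}

rep_A  = cause_repertoire(lambda s: A(s) == 1)               # "ON inputs even": 8 states
rep_B  = cause_repertoire(lambda s: B(s) == 1)               # "ON inputs > 2": 5 states
rep_AB = cause_repertoire(lambda s: A(s) == 1 and B(s) == 1)

# The second-order mechanism (A,B) = (ON,ON) pins the past down to the single
# state with all four inputs ON, which neither element specifies on its own.
print(list(rep_AB))  # [(1, 1, 1, 1)]
```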
The intuitions of ontological reductionism stem from the natural inclination to conflate “existing” with “being constituted” of a physical substrate, ultimately atoms. As argued by IIT, a physical substrate is constituted by elements that can be observed and manipulated with the available tools, through which we infer what actually exists – the cause-effect repertoires and structures specified by the elements that constitute a physical substrate. Once the distinction between existence and constitution becomes clear, so does the incoherence of ontological reductionism: aggregates constituted of atoms (macro-elements), as well as compositions of elements (systems), can satisfy the same criterion for existence as atoms – having maximally irreducible cause-effect power. Moreover, systems composed of elements having maximally irreducible cause-effect power on themselves (complexes) exist intrinsically as conceptual structures, for themselves rather than for an external observer/manipulator. Thus, connecting first-order elements in certain ways is far from ontologically innocent, as it truly brings new things into being, including conscious beings. [31]
  • Free will. As indicated above, according to IIT (i) what actually exists is an entire cause-effect structure, much more than the first-order cause-effect repertoires of atomic elements; (ii) when a conceptual structure changes much more happens than first-order events; and (iii) its changes are caused by much more than first-order causation. This view, together with the central identity of IIT, which says that an experience is a conceptual structure that is maximally irreducible intrinsically, has several implications for the notions of free will and responsibility (Tononi 2013).
First, for a choice to be conscious, a system’s cause-effect power must be exerted intrinsically - upon itself: the conceptual structure must be “causa sui.” In other words, a conscious choice must be caused by mechanisms intrinsic to a complex rather than by extrinsic factors. This requirement is in line with the established notion that, to be free, a choice must be autonomous - decided from within and not imposed from without.
Second, for a choice to be highly conscious, the conceptual structures that correspond to the experience of deliberating and deciding (“willing”) must be highly irreducible - they must be composed of many concepts, including a large number of higher-order ones. In other words, a conscious choice involves a large amount of cause-effect power and is definitely not reducible to first-order causes and effects. Hence, the reductionist assumption that ultimately “my neurons made me do it” is just as definitely incorrect.
Seen this way, a system that only exists extrinsically, such as a feed-forward network, is not “free” at all, but at the mercy of external inputs. In this case nothing exists from the intrinsic perspective - there is nothing it is like to be a feed-forward network. But if intrinsically there is nothing, it cannot cause anything either. There is only extrinsic causation - a machine “going through the motions” for the benefit of an external manipulator/observer. On the other hand, a system that exists intrinsically, but only minimally so – say, two coupled elements that can only turn on and off together, achieving minimal values of \(\Phi\) – may well be free, but it has minimal “will.” In other words, while its choices are free because they are determined intrinsically, very little is being determined - just two first-order concepts. By contrast, a complex that specifies a rich conceptual structure of high \(\Phi\) is both free and has high will: its choices are determined intrinsically and they involve a large amount of cause-effect power. That is to say, to have free will, one needs to be as free as possible from external causes, and as determined as possible by internal causes - the multitude of concepts that compose an experience. In short, more consciousness, more free will.
The claim that the more one’s choices are intrinsically determined, the more one has free will, may at first seem at odds with the widespread conviction that determinism and free will are incompatible. However, at issue is not determinism, but the combination of the extrinsic perspective and reductionism. Consider again the role of our neurons when we make a decision. The extrinsic perspective applied to neurons shows us that what each neuron does is determined extrinsically, by its inputs; hence neurons are not free, just like transistors are not. Moreover, ontological reductionism leads us to believe that ultimately all there is are neurons; hence none of us, being constituted of neurons, is free, just like a digital computer is not free. In this scenario, consciousness and conscious decisions are inevitably epiphenomenal: they merely “go along for the ride” but have no causal role to play, as the neurons do all the causal work (ref). If we look at the brain this way, it does not seem to be fundamentally different from any other machine, say a digital computer running a simulation, except that the elements that update their state are neurons rather than transistors. In both cases, we envision a machine “going through its motions”, which leaves no room for free will. As was argued above, however, there is much more to the neural substrate of consciousness than just neurons and their extrinsic determinants: if we adhere to maximally irreducible cause-effect power as the criterion for existence, what exists when we make a conscious choice is a rich conceptual structure, involving much more than first-order causation by individual neurons. Moreover, the cause-effect power is exerted intrinsically, rather than extrinsically: it is not the extrinsic inputs to each neuron that make things happen, but the conceptual structure acting upon itself.
In summary, when I deliberate and make a decision, what exists and causes the decision is my own consciousness – nothing less and nothing more, and the decision is free because it has been brought about by intrinsic causes and effects. By contrast, when a digital simulation of my neurons unfolds, even if it leads to the same behavior, what exists are just individual transistors, whose individual choices are determined extrinsically: there is no consciousness, no intrinsic causation, and therefore no free will.
Finally, it is often suggested that one’s will can be free only if one might have acted otherwise – the so-called requirement for alternative possibilities. But according to IIT, determinism is the friend, not the foe of free will, since any indeterminism reduces cause-effect power and therefore reduces free will. Said otherwise, if I were to find myself in the same exact situation, I would want to choose in exactly the same way, since in this way the choice would not be left to chance, but would be fully determined by me - a “me” that includes the full richness of my consciousness – my understanding, my memories, and my values.
  • Meaning, matching, and the evolution of consciousness. In IIT, meaning is completely internalistic: it is specified through the interlocking of cause-effect repertoires of multiple concepts within a conceptual structure that is maximally irreducible intrinsically. In this view, the meaning of a concept depends on the context provided by the entire conceptual structure to which it belongs, and corresponds to how it constrains the overall “form” of the conceptual structure. Meaning is thus both self-generated, self-referential (internalistic) and holistic (Oizumi, Albantakis et al. 2014). While emphasizing the internalistic nature of concepts and meaning, IIT naturally recognizes that in the end most concepts owe their origin to the presence of regularities in the environment, to which they ultimately must refer, albeit only indirectly. This is because the mechanisms specifying the concepts have themselves been honed under selective pressure from the environment during evolution, development, and learning. Moreover, the relationship between the conceptual structure specified by a complex of elements, such as a brain, and the environment to which it is adapted, is not one of “information processing”, but rather one of “matching” between internal and external cause-effect structures. Matching can be quantified as the distance between the set of conceptual structures specified when a system interacts with its typical environment and those generated when it is exposed to a structureless (“scrambled”) version of it. The notion of matching is being investigated through neurophysiological experiments and computer simulations. Moreover, IIT predicts that adaptation to an environment should lead to an increase in matching and thereby to an increase in consciousness. This prediction can be investigated by evolving simulated agents in virtual environments (“animats”) and applying measures of information integration (Joshi, Tononi et al. 2013, Albantakis et al. 2014). Initial results suggest that, over the course of the animats’ adaptation, integrated information increases, and that its increase depends on the complexity of the environment, given constraints on the number of elements and connections within the animats.


  • Axioms: Self-evident truths about consciousness: experience exists, independent of external observers (intrinsic existence); it is structured (composition); it is the particular way it is (information); it is irreducible to non-interdependent components (integration); and it is definite (exclusion).
  • Postulates: Hypotheses about the physical substrate of consciousness, derived from the axioms: to support consciousness, a physical system must have cause-effect power upon itself (intrinsic existence); be composed of parts that have cause-effect power on the system (composition); specify a particular cause-effect structure (information); which must be irreducible (integration); and maximally so (exclusion).
  • Physical system: A collection of elements at a certain spatio-temporal grain, in a certain state.
Mathematically, a physical system constituted of \(n \geq 2\) elements can be represented as a discrete-time random vector of size \(n\), \(\mathbf{X}_t = \{X_{1,t}, X_{2,t}, \ldots, X_{n,t}\}\) with observed state at time t, \(\mathbf{x}_t = (x_{1,t}, x_{2,t}, \ldots, x_{n,t}) \in \Omega_X\) where \((\Omega_X, d)\) is the metric space of all possible states of \(\mathbf{X}_t\). For example, if each \(X_{i,t}\) is a binary variable, then \(\Omega_X = \{0,1\}^n\) and the Hamming metric can be used to define the distance between states.
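The state space and metric for a small binary system can be written down directly; a minimal sketch (the choice of three elements is arbitrary):

```python
from itertools import product

n = 3  # a hypothetical system of three binary elements
omega = list(product([0, 1], repeat=n))  # Omega_X = {0,1}^n

def hamming(x, y):
    """Hamming distance: number of elements whose states differ."""
    return sum(a != b for a, b in zip(x, y))

print(len(omega))                      # 8 = 2**3 possible states
print(hamming((0, 0, 0), (1, 0, 1)))  # 2
```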
  • Element: An elementary constituent of a physical system, for example a neuron in the brain, or a logic gate in a computer, which has at least two internal states, inputs that can influence these states, and outputs that in turn are influenced by these states.
Mathematically, an element can be represented as a discrete variable \(X_{i,t}\) which at time t takes one of at least two possible states.
  • Mechanism: Any composition of elements constituting a physical system having a maximally irreducible cause-effect repertoire over a subset of elements within the system (its purview).
Let \(\mathbb{P}(\mathbf{X}_t)\) be the power set of all possible subsets of a physical system \(\mathbf{X}_t\). In what follows, let \(\mathbf{Y}_t \in \mathbb{P}(\mathbf{X}_t)\) be a candidate mechanism, \(\mathbf{Z}_{t-1} \in \mathbb{P}(\mathbf{X}_{t-1})\) its past purview, and \(\mathbf{Z}_{t+1} \in \mathbb{P}(\mathbf{X}_{t+1})\) its future purview. Note that the past and future purviews may be different for a given mechanism. Furthermore, for any subset (mechanism or purview) \(\mathbf{Y}_t \in \mathbb{P}(\mathbf{X}_t)\) define the complement of \(\mathbf{Y}_t\) in \(\mathbf{X}_t\), \[\mathbf{Y}^c_t = \mathbf{X}_t \setminus \mathbf{Y}_t. \]
  • Cause-effect repertoire: The probability distribution of possible states of past and future purviews \(\mathbf{Z}_{t\pm 1}\), as specified by a candidate mechanism \(\mathbf{Y}_t\) in its current state. It can be represented as a point in cause-effect space. It specifies in which particular way the mechanism gives form to (“informs”) the space of possible states of the system.
Mathematically, the cause repertoire of an element of the mechanism in a state \(Y_{i,t}=y_{i,t}\) over the past purview \(\mathbf{Z}_{t-1}\) is the probability function for the past state of the purview conditioned on the current state of the candidate mechanism, evaluated by perturbing the system into all possible states, \[p_{\text{cause}}(\mathbf{z}_{t-1}|y_{i,t}) \equiv \frac{\sum_{\mathbf{z^c} \in \Omega_{Z^c}}p\big(y_{i,t}~|~do(\mathbf{z}_{t-1},\mathbf{z^c})\big)}{\sum_{\mathbf{z} \in \Omega_{Z}}\sum_{\mathbf{z^c} \in \Omega_{Z^c}} p\big(y_{i,t}~|~do(\mathbf{z},\mathbf{z^c})\big)}, \quad \mathbf{z}_{t-1} \in \Omega_{Z_{t-1}}. \] The inputs of every element are perturbed independently using virtual elements to account for the effects of common input. The resulting cause repertoire for the entire candidate mechanism has the form \[ p_{\text{cause}}\big(\mathbf{z}_{t-1}|\mathbf{y}_t\big) \equiv \frac{1}{K}\prod_{i=1}^{|\mathbf{y}_t|} p_{\text{cause}}\big(\mathbf{z}_{t-1}~|~y_{i,t}\big), \quad \mathbf{z}_{t-1} \in \Omega_{Z_{t-1}}, \] where \(K\) is the normalization constant which ensures the repertoire sums to one, \[ K = \sum_{\mathbf{z} \in \Omega_{Z_{t-1}}} \prod_{i=1}^{|\mathbf{y}_t|} p_{\text{cause}}\big(\mathbf{z}~|~y_{i,t}\big).\]
Similarly, the effect-repertoire of the candidate mechanism in a state \(\mathbf{Y}_{t} = \mathbf{y}_t\) over an element of the future purview \(Z_{t+1,i}\in \mathbf{Z}_{t+1}\) is given by \[ p_{\text{effect}}\big(z_{t+1,i}|\mathbf{y}_t\big) \equiv \frac{1}{|\Omega_{Y^c}|}\sum_{\mathbf{y^c} \in \Omega_{Y^c}}p\big(z_{t+1,i}~|~do(\mathbf{y}_t,\mathbf{y^c})\big), \quad z_{t+1,i} \in \Omega_{Z_i} \] The effect repertoire for a candidate mechanism in a state \(\mathbf{Y}_t=\mathbf{y}_t\) over a future purview \(\mathbf{Z}_{t+1}\) is then given by, \[ p_{\text{effect}}(\mathbf{z}_{t+1}|\mathbf{y}_t) \equiv \prod_{i=1}^{|\mathbf{z}_{t+1}|} p_{\text{effect}}\big(z_{t+1,i}|\mathbf{y}_t\big). \] Note that in the effect repertoire the sum of probabilities is equal to one, so normalization is not required.
Together, these two functions define the cause-effect repertoire of a subset of elements over a purview, \[ CER(\mathbf{y}_t, \mathbf{Z}_{t\pm 1}) = \{p_{\text{cause}}(\mathbf{z}_{t-1}|\mathbf{y}_t), ~p_{\text{effect}}(\mathbf{z}_{t+1}|\mathbf{y}_t) \}. \] These probability distributions can be obtained by perturbations of the system over all possible initial states[32] . For additional details see Oizumi, Albantakis et al. 2014.
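A minimal numerical sketch of these definitions, for a hypothetical deterministic three-gate system. For simplicity, the cause repertoire below conditions on the joint state of the whole system, omitting the virtual-element correction for common input described above:

```python
from itertools import product

# Hypothetical deterministic three-gate system:
# A <- B OR C, B <- A AND C, C <- A XOR B.
def update(state):
    a, b, c = state
    return (b | c, a & c, a ^ b)

states = list(product([0, 1], repeat=3))

def effect_repertoire(y):
    """Distribution over next states under do(y); deterministic here."""
    return {z: 1.0 if update(y) == z else 0.0 for z in states}

def cause_repertoire(y):
    """Perturb the system uniformly into every possible past state z
    and renormalize p(y | do(z))."""
    weights = {z: 1.0 if update(z) == y else 0.0 for z in states}
    total = sum(weights.values())
    return {z: w / total for z, w in weights.items()}

er = effect_repertoire((1, 0, 0))
cr = cause_repertoire((1, 0, 0))
print([z for z, p in er.items() if p > 0])  # [(0, 0, 1)]
print([z for z, p in cr.items() if p > 0])  # [(0, 0, 1), (1, 1, 0)], 0.5 each
```

Note the asymmetry: the deterministic update fixes a unique effect, while degeneracy leaves two past states equally compatible with the current one.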
  • Cause-effect structure: The set of all cause-effect repertoires in cause-effect space as informed by all the mechanisms of a physical system in a state.
Denoting by \(\mathbb{M}(\mathbf{X}_t=\mathbf{x}_t) \equiv \mathbb{M}(\mathbf{x}_t)\) the set of all mechanisms of a system \(\mathbf{X}_t\) in state \(\mathbf{x}_t\), the cause-effect structure is \[CES(\mathbf{x}_t) = \{ CER(\mathbf{y}_t, \mathbf{Z}_{t\pm 1}) ~|~ \forall~(\mathbf{y}_t,\mathbf{Z}_{t\pm 1}) \in \mathbb{M}(\mathbf{x}_t) \}. \]
  • Cause-effect space: A high-dimensional space of probabilities with one axis for each possible past and future state of the system.
Mathematically, cause-effect space is a metric space \(\big(CES(\mathbf{x}_t), D\big)\) where each cause-effect repertoire in the cause-effect structure is a point and the distance \(D\) between points is an extension of the earth mover’s distance based on the underlying metric between states (for example, the Hamming distance), \[D(\mathbf{y}_t^{(1)},~\mathbf{y}_t^{(2)}) = \operatorname{emd}\left(p_{\text{cause}}^{(1)},~p_{\text{cause}}^{(2)},~d\right) + \operatorname{emd}\left(p_{\text{effect}}^{(1)},~p_{\text{effect}}^{(2)},~d\right) \]
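The earth mover's distance itself can be computed as a small transport linear program. A sketch (assuming SciPy is available), using the Hamming distance between the states of two binary elements as the ground metric:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def emd(p, q, d):
    """Earth mover's distance between distributions p and q (length-m
    vectors) under ground metric d (m x m matrix), as a transport LP."""
    m = len(p)
    c = d.reshape(-1)  # cost of moving one unit of mass from bin i to bin j
    A_eq = np.zeros((2 * m, m * m))
    for i in range(m):
        A_eq[i, i * m:(i + 1) * m] = 1  # total mass leaving bin i is p[i]
        A_eq[m + i, i::m] = 1           # total mass arriving at bin i is q[i]
    b_eq = np.concatenate([p, q])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, method="highs")
    return res.fun

# Ground metric: Hamming distance between the four states of two binary elements.
states = list(product([0, 1], repeat=2))
d = np.array([[sum(a != b for a, b in zip(x, y)) for y in states] for x in states])

p = np.array([1.0, 0.0, 0.0, 0.0])  # point mass on state (0, 0)
q = np.array([0.0, 0.0, 0.0, 1.0])  # point mass on state (1, 1)
print(emd(p, q, d))  # ~2.0: both bits must be moved
```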
  • Cause-Effect Information (cei): How much the cause-effect repertoire specified by a mechanism in a state informs the possible past and future states of a system.
Cause-information (ci) and effect-information (ei) of a candidate mechanism \(\mathbf{Y}_t\) in a state \(\mathbf{y}_t\) over the purviews \(\mathbf{Z}_{t\pm 1}\) are defined as the distance between the cause (effect) repertoire and the corresponding unconstrained probability distribution, \[ ci(\mathbf{y}_t,~\mathbf{Z}_{t-1}) = \operatorname{emd}\big(p_{\text{cause}}(\mathbf{Z}_{t-1}|\mathbf{y}_t),~ p_{\text{cause}}(\mathbf{Z}_{t-1}|\emptyset),~d\big) \] \[ ei(\mathbf{y}_t,\mathbf{Z}_{t+1}) = \operatorname{emd}\big(p_{\text{effect}}(\mathbf{Z}_{t+1}|\mathbf{y}_t),~ p_{\text{effect}}(\mathbf{Z}_{t+1}|\emptyset),~d\big), \]
The cause-effect information of \(\mathbf{Y}_t=\mathbf{y}_t\) over the purviews \(\mathbf{Z}_{t\pm 1}\) is the minimum of ci and ei, \[ cei(\mathbf{y}_t,\mathbf{Z}_{t\pm 1}) = \min \big( ci(\mathbf{y}_t,\mathbf{Z}_{t-1}), ~ei(\mathbf{y}_t,\mathbf{Z}_{t+1}) \big). \] A subset of elements in a state \(\mathbf{Y}_t=\mathbf{y}_t\) is said to have cause-effect power over its purviews \(\mathbf{Z}_{t \pm 1}\) if \(cei(\mathbf{y}_t,\mathbf{Z}_{t\pm 1}) > 0\).
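For a single binary purview element, the EMD with a Hamming ground metric reduces to the difference of the ON-probabilities, so cause-effect information can be sketched in a few lines. Uniform unconstrained repertoires (0.5) are an assumption here: they hold for simple symmetric toy systems, but in general the unconstrained distributions must be computed from the system's own mechanisms.

```python
def emd_binary(p_on, q_on):
    """EMD between two Bernoulli distributions under the Hamming metric
    reduces to the absolute difference of their ON-probabilities."""
    return abs(p_on - q_on)

def cause_effect_information(p_cause_on, p_effect_on,
                             uc_cause_on=0.5, uc_effect_on=0.5):
    """cei = min(ci, ei): the lesser of the distances between the
    constrained cause (effect) repertoire and its unconstrained
    counterpart.  The uniform defaults are an assumption, not general."""
    ci = emd_binary(p_cause_on, uc_cause_on)
    ei = emd_binary(p_effect_on, uc_effect_on)
    return min(ci, ei)
```

Taking the minimum captures the requirement of both cause and effect power: a mechanism whose effect repertoire is unconstrained (ei = 0) has cei = 0 no matter how selective its cause repertoire is.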
  • Integrated information (\(\varphi\) or “small” phi): The distance between the cause-effect repertoire specified by a mechanism and that specified by its (minimal) parts. Thus \(\varphi\) measures the irreducibility of a cause-effect repertoire (integration at the level of individual mechanisms):
A partition of a mechanism \(\mathbf{Y}_t\) over a purview \(\mathbf{Z}\) is a set \(P = \{\mathbf{Y}_{1,t}, \mathbf{Y}_{2,t}, \mathbf{Z}_{1}, \mathbf{Z}_{2}\}\) such that \(\{\mathbf{Y}_{1,t},\mathbf{Y}_{2,t}\}\) is a partition of \(\mathbf{Y}_t\), \(\{\mathbf{Z}_{1}, \mathbf{Z}_{2}\}\) is a partition of \(\mathbf{Z}\), \((\mathbf{Y}_{1,t}\cup \mathbf{Z}_{1}) \neq \emptyset\), and \((\mathbf{Y}_{2,t}\cup \mathbf{Z}_{2})\neq \emptyset\).
The cause (effect) information specified by the whole mechanism above and beyond that specified by the partitioned mechanism is the distance between the corresponding probability distributions, assuming \(\mathbf{Z}_{1}|\mathbf{Y}_{1,t}\) and \(\mathbf{Z}_{2}|\mathbf{Y}_{2,t}\) are independent (the connections between them have been 'cut' and injected with independent noise sources): \[ \varphi_{\text{cause}}(\mathbf{y}_t,\mathbf{Z}_{t-1}, P) = \operatorname{emd}\big(p_{\text{cause}}(\mathbf{Z}_{t-1}|\mathbf{y}_t),~ p_{\text{cause}}(\mathbf{Z}_{1,t-1}|\mathbf{y}_{1,t}) \otimes p_{\text{cause}}(\mathbf{Z}_{2,t-1}|\mathbf{y}_{2,t}),~d\big) \] \[ \varphi_{\text{effect}}(\mathbf{y}_t,\mathbf{Z}_{t+1}, P) = \operatorname{emd}\big(p_{\text{effect}}(\mathbf{Z}_{t+1}|\mathbf{y}_t),~ p_{\text{effect}}(\mathbf{Z}_{1,t+1}|\mathbf{y}_{1,t}) \otimes p_{\text{effect}}(\mathbf{Z}_{2,t+1}|\mathbf{y}_{2,t}),~d\big). \] The integrated cause (effect) information of \(\mathbf{Y}_t\) over \(\mathbf{Z}_{t\pm 1}\) is the minimum over all possible partitions (the minimum information partition, MIP): \[ \varphi_{\text{cause}}^{MIP}(\mathbf{y}_t,\mathbf{Z}_{t-1}) = \min_P \varphi_{\text{cause}}(\mathbf{y}_t,\mathbf{Z}_{t-1}, P), \] \[ \varphi_{\text{effect}}^{MIP}(\mathbf{y}_t,\mathbf{Z}_{t+1}) = \min_P \varphi_{\text{effect}}(\mathbf{y}_t,\mathbf{Z}_{t+1}, P). \] The integrated information of \(\mathbf{Y}_t=\mathbf{y}_t\) over the purviews \(\mathbf{Z}_{t\pm 1}\) is then: \[ \varphi(\mathbf{y}_t,\mathbf{Z}_{t\pm 1}) = \min (\varphi_{\text{cause}}^{MIP},~ \varphi_{\text{effect}}^{MIP}). \]
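The effect side of this computation can be sketched for a toy deterministic two-element "copy" system (each element takes the previous state of the other); the cause side is symmetric. Because effect repertoires factor over purview elements, the Hamming-metric EMD is taken here as a sum of per-element marginal differences, a simplification valid for product distributions; the partition enumeration follows the definition above, and the function names are illustrative.

```python
import itertools

N = 2  # toy system: two binary elements that copy each other

def transition(state):
    """A_{t+1} = B_t, B_{t+1} = A_t."""
    return (state[1], state[0])

def effect_marginal(mech, mech_state, k):
    """p(element k is ON at t+1), clamping the mechanism to its state
    and averaging over all states of the mechanism's complement."""
    comp = [i for i in range(N) if i not in mech]
    total = 0.0
    for cstate in itertools.product((0, 1), repeat=len(comp)):
        full = [0] * N
        for i, s in zip(mech, mech_state):
            full[i] = s
        for i, s in zip(comp, cstate):
            full[i] = s
        total += transition(full)[k]
    return total / 2 ** len(comp)

def phi_effect(mech, mech_state, purview):
    """Minimum, over all admissible partitions {Y1,Z1}/{Y2,Z2}, of the
    distance between the whole and the partitioned effect repertoire.
    For product repertoires the Hamming-metric EMD is a per-element sum."""
    whole = {k: effect_marginal(mech, mech_state, k) for k in purview}
    best = float("inf")
    for r in range(len(mech) + 1):
        for y1 in itertools.combinations(mech, r):
            y2 = tuple(i for i in mech if i not in y1)
            for s in range(len(purview) + 1):
                for z1 in itertools.combinations(purview, s):
                    z2 = tuple(k for k in purview if k not in z1)
                    if not (y1 or z1) or not (y2 or z2):
                        continue  # both parts must be non-empty
                    part = {}
                    for ys, zs in ((y1, z1), (y2, z2)):
                        sub = tuple(mech_state[mech.index(i)] for i in ys)
                        for k in zs:
                            part[k] = effect_marginal(ys, sub, k)
                    best = min(best, sum(abs(whole[k] - part[k])
                                         for k in purview))
    return best
```

In this system the first-order mechanism A = 1 over purview {B} is irreducible (cutting it from its purview changes the repertoire, so φ_effect = 0.5), whereas the joint mechanism AB over purview {A, B} is fully reducible (φ_effect = 0): the partition that pairs each element with the target it drives reproduces the whole repertoire, exactly the case the integration postulate discounts.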
  • Minimum information partition (MIP): The partition that makes the least difference (the minimum “difference” partition).
  • Maximally irreducible cause-effect repertoire (MICE): the cause-effect repertoire specified by a mechanism over the purviews for which integrated information is maximal \(\varphi = \varphi^{\textrm{max}}\): \[ MICE(\mathbf{y}_t) = CER(\mathbf{y}_t, \mathbf{Z}_{t\pm 1}), ~\text{such that for any other} ~ \mathbf{Z}_{t\pm 1}^* \in \mathbb{P}(\mathbf{X}_{t \pm 1}), \quad \varphi(\mathbf{y}_t,\mathbf{Z}_{t\pm 1}^*) \leq \varphi^{\textrm{max}}(\mathbf{y}_t,\mathbf{Z}_{t\pm 1}). \]
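The purview search in this definition is a plain argmax over subsets. A schematic sketch follows, with `phi` standing for any implementation of the integrated-information function defined above (the callable and its signature are illustrative, not part of the theory):

```python
import itertools

def mice_purview(phi, mech_state, candidate_elements):
    """Return the purview over which phi is maximal, together with
    phi_max.  `phi(mech_state, purview) -> float` is supplied by the
    caller; this sketch only performs the argmax in the MICE definition."""
    best_phi, best_purview = 0.0, None
    for r in range(1, len(candidate_elements) + 1):
        for z in itertools.combinations(candidate_elements, r):
            v = phi(mech_state, z)
            if v > best_phi:
                best_phi, best_purview = v, z
    return best_purview, best_phi
```

Note that ties between purviews are not disambiguated here; in practice some convention (e.g. preferring larger purviews) is needed.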
  • Concept or quale sensu stricto (q): A mechanism and the maximally irreducible cause-effect repertoire it specifies, with the associated value of integrated information, \[ q(\mathbf{y}_t) = \{ MICE(\mathbf{y}_t), \varphi^{\textrm{max}}(\mathbf{y}_t)\} \].
  • Conceptual structure (CS): The set of all concepts specified by all the mechanisms of a system with their associated \(\varphi^{\textrm{max}}\) values. The corresponding maximally irreducible cause-effect repertoires can be plotted as a constellation of “stars” of size \(\varphi^{\textrm{max}}\) in cause-effect space.
Mathematically, a conceptual structure is the set of concepts specified by a physical system in a state \(\mathbf{X}_t=\mathbf{x}_t\): \[ CS(\mathbf{x}_t) = \{q(\mathbf{y}_t) ~|~ \mathbf{y}_t \in \mathbb{M}(\mathbf{x}_t)\}. \]
When taking the \(\operatorname{emd}\) distance between conceptual structures, the \(\varphi^{\textrm{max}}\) values are the earth to be moved, and the distance is between the cause-effect repertoires.
  • Conceptual information: The distance between the conceptual structure specified by a physical system in a state \(\mathbf{X}_t=\mathbf{x}_t\) and the conceptual structure specified by a system with no mechanisms: \[ CI(\mathbf{x}_t) = \operatorname{emd}\big(CS(\mathbf{x}_t),~\emptyset,~ D\big). \]
  • Integrated (conceptual) information (\(\Phi\) or “big” PHI): The distance between the conceptual structure specified by a physical system in a state \(\mathbf{X}_t=\mathbf{x}_t\) and that specified by its (minimal) parts. Thus, \(\Phi\) measures the intrinsic irreducibility of a conceptual structure (intrinsic integration at the system level): \[ \Phi(\mathbf{x}_t) = \operatorname{emd}\big(CS(\mathbf{x}_t),~CS(\mathbf{x}_t^{MIP}),~D\big). \]
  • Maximally irreducible conceptual structure (MICS), also conceptual structure tout court, or quale sensu lato (Q): The conceptual structure specified by the set of elements for which integrated (conceptual) information is maximal \((\Phi = \Phi^{\textrm{max}})\) \[ \Phi^{\textrm{max}}(\mathbf{x}_t) > 0; \text{ for any other }\mathbf{X}_t^* \text{ such that } (\mathbf{X}_t^* \cap \mathbf{X}_t)\neq \emptyset, ~\Phi(\mathbf{x}_t^*) \leq \Phi^{\textrm{max}}(\mathbf{x}_t). \]
  • Complex: A physical system that specifies a quale sensu lato – a conceptual structure that is a maximum of intrinsic irreducibility \(\Phi\) over elements, spatial, and temporal grain. Only a complex exists as an entity from its own intrinsic perspective.
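At the system level, the exclusion postulate likewise amounts to a maximization over overlapping candidate systems. A schematic sketch follows, with `big_phi` standing for any implementation of \(\Phi\) (the callable and its signature are illustrative); ties between overlapping candidates are not disambiguated here, and the search over spatial and temporal grains is omitted:

```python
import itertools

def find_complexes(big_phi, elements):
    """Return the candidate subsets whose Phi is positive and not exceeded
    by the Phi of any overlapping candidate (the exclusion postulate).
    `big_phi(subset) -> float` is supplied by the caller."""
    candidates = [s for r in range(1, len(elements) + 1)
                  for s in itertools.combinations(elements, r)]
    complexes = []
    for s in candidates:
        v = big_phi(s)
        if v > 0 and all(big_phi(t) <= v for t in candidates
                         if t != s and set(t) & set(s)):
            complexes.append((s, v))
    return complexes
```

The brute-force enumeration over all subsets shows why computing complexes is tractable only for very small systems: the number of candidates grows exponentially with the number of elements.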


  1. ^ Note that consciousness can include the feeling of reflecting upon experience itself (reflexive or higher-order consciousness). Note also that dreams, too, are conscious – they are experiences, though they are unrelated to the current environment. [return]
  2. ^ The fact that my experience has the phenomenal distinctions it has, rather than less (a subset) or more (a superset) implies that it cannot be less or more (within the only world that is accessible to me). A way to see this is to consider that otherwise there would not be a sufficient reason for my experience being what it is, rather than being a subset or superset. Moreover, there would be a regress: if my experience could be a subset of what it is, then it could have been just as well a subset of the subset, down to nothing (or a superset, up to everything). [return]
  3. ^ While the set of axioms was formulated after extensive consideration, IIT is a developing theoretical framework, and it is important to continue questioning whether the set may be incomplete. For example, various other properties of consciousness could be considered, some of which have been highlighted in the literature, such as subject-object distinction (an experience may require a subject and an object); change (an experience usually transitions into another); time (an experience usually has a before and an after); space (experience typically takes place in some spatial frame), intentionality (experiences usually refer to something in the world); a figure-ground distinction; situatedness (an experience is often referred to a time and place); causality (experience offers the opportunity for action); affect (experience is often colored by some mood); self (many experiences include a reference to one’s body or even autobiographical self), and so on. However, several arguments can be made against adding such properties to the list. For example, one can argue that assuming the existence of a subject of the experience, in addition to the experience itself, is unnecessary; that an experience may stay the same without vanishing; that some experiences may seem timeless; that there may be experiences lacking spatial dimensions, as in some dreams, or situatedness (disoriented patients), or figure-ground distinctions (Ganzfeld); that some experiences, such as boredom, may not refer to something in the world; that causality, change, and time may be derived from the existence axiom, and so on. [return]
  4. ^ Experiences change in ways that are clearly non-random. A good inference is to assume that the regularities of one’s experience are due to the existence of a physical world, which contains many objects that can be observed and manipulated by us. This inference is certainly consistent with common sense and, with much greater detail and predictive power, with the worldview developed by science. Over time, this model has been extended to accommodate the observations that our own consciousness vanishes in deep sleep, that it can be influenced by drugs, and that a special object within the physical world - the brain- has a privileged connection to the quantity and quality of experience. These are just another set of regularities that need to be explained parsimoniously and coherently. In comparison, postulating a single giant complex to account for one’s experience seems much less “lovely” (Lipton 2004), if not unfeasible. Finally, the solipsistic alternative that the regularities of one’s experience are purely due to chance is neither lovely nor in line with the principle of sufficient reason.[return]
  5. ^ This is the exact opposite of what is usually done: start from some plausible physical substrate, typically some interconnected set of neurons in the brain, and postulate that it would somehow give rise to experience. Schopenhauer saw this quite well: “Materialism … tries to find the first and simplest state of matter, and then to develop all the others from it, ascending from mere mechanism to chemistry, to polarity, to the vegetable and the animal kingdoms. Supposing this were successful, the last link of the chain would be animal sensibility, that is to say knowledge [the German text says “Erkenntniss,” which in this context may be better rendered by “sentience”]; which, in consequence, would then appear as a mere modification of matter, a state of matter produced by causality. Now if we had followed materialism thus far with clear notions, then, having reached its highest point, we should experience a sudden fit of the inextinguishable laughter of the Olympians. As though waking from a dream, we should all at once become aware that its final result, produced so laboriously, namely knowledge [sentience] was already presupposed as the indispensable condition at the very first starting-point, at mere matter. With this we imagined that we thought of matter, but in fact we had thought of nothing but the subject that represents matter, the eye that sees it, the hand that feels it, the understanding that knows it. Thus the tremendous petitio principii … Materialism is therefore the attempt to explain what is directly given to us from what is given indirectly.” (Schopenhauer, The World as Will and Representation, translated by E.F.J. Payne, p26–27). A similar issue arises with non-physical starting points. For example, one may begin from some intriguing mathematical property, such as incomputability, incompressibility, or some kind of complexity, and postulate that it may somehow be associated with experience. 
In short, one cannot “squeeze” consciousness out of matter – whether the brain or other substrates - just as one cannot conjure up consciousness from mathematics. [return]
  6. ^ An element in a state is therefore an elementary computing mechanism, in that it can change its internal state based on its inputs and communicate this change through its outputs. [return]
  7. ^ In Plato’s Sophist, the Eleatic Stranger says: “I suggest that everything which possesses any power of any kind, either to produce a change in anything of any nature or to be affected even in the least degree by the slightest cause, though it be only on one occasion, has real existence. For I set up as a definition which defines being, that it is nothing else but power.” (Plato, Sophist, 247 D,E, translated by Harold N. Fowler, Loeb). This definition of being has been strangely neglected over the centuries, though it was briefly alluded to by Samuel Alexander (hence “Alexander’s dictum”). In IIT, the Eleatic definition is augmented by adding the requirement that, in order to exist, something must have cause AND effect power (rather than cause OR effect); that causation implies a repertoire of alternatives; that it must be irreducible (integration); maximally so (exclusion); and that it necessarily exists in a particular way (information). Moreover, IIT distinguishes between extrinsic existence (having maximally irreducible cause-effect power from the perspective of an external observer/manipulator) and intrinsic existence (having maximally irreducible cause-effect power on oneself). [return]
  8. ^ While the notion of information in IIT is faithful to the etymology of the term (giving “form”), it differs substantially from that in communication theory or in common language (Oizumi, Albantakis et al. 2014, supplementary material S3). Also, in IIT information and causation go together: there cannot be observer-independent information without mechanisms having cause-effect power, and there cannot be causation without a repertoire of alternatives. [return]
  9. ^ Cf. Leibniz: “Je tiens pour un axiome cette proposition identique qui n'est diversifiée que par l'accent: que ce qui n'est pas véritablement un être n'est pas non plus véritablement un être. (I consider as an axiom this self-identical proposition, diversified by emphasis only: that which is not truly one being is not truly a being at all.)" (Leibniz 1988, April 30, 1687, p. 165). [return]
  10. ^ In IIT, an irreducible mechanism (composed of two or more elements) underpins the notion of binding (Treisman and Gelade 1980). For example, phenomenologically, the high-order distinction “blue book” binds the first-order distinction “book” with that of “blue”. Neurophysiologically, IIT predicts that the joint causes and the joint effects of a neuronal element standing for “book” and one standing for “blue” go above and beyond their separate causes and effects: for instance, if the two neural elements firing synchronously have stronger post-synaptic effects, or if they have a joint target that only turns on when their firing rates are both high, and so on. [return]
  11. ^ A system that supports a conceptual structure that is maximally irreducible intrinsically has maximum cause-effect power upon itself: it exists intrinsically and can be said to “specify” information. By contrast, a system that only supports a maximally irreducible cause-effect structure, as established using bidirectional partitions (e.g. a purely feed-forward system), has cause-effect power only for an external observer/manipulator: it exists extrinsically and can be said to “process” information. [return]
  12. ^ Another notable property of consciousness is that experience changes all the time. However, since “timeless” experiences can occur, at least “for a short time”, it is arguable whether change should constitute an axiom/postulate (see above). [return]
  13. ^ To recapitulate, the possible existence of a conceptual structure over a particular set of elements and spatio-temporal grain is quantified by irreducibility across the minimum information partition: something can exist only to the extent that it is irreducible (integration). Maximum irreducibility determines which conceptual structure actually exists intrinsically, out of many that are possible over overlapping elements and spatio-temporal grains (exclusion). Both postulates can be related to the notion of “best causal model”—the simplest “physical” (effective) structure that can account for all the data (under observations/manipulations), in the spirit of Solomonoff’s principle of inductive inference (Solomonoff 1964): maximal irreducibility identifies how an effective structure is best decomposed into wholes and their residual interactions. It should be noted, however, that Solomonoff’s principle is formulated with respect to effective procedures (algorithms: what something does), rather than effective structures (what something is). [return]
  14. ^ The distinction between existence and information bears some resemblance to the Scholastic distinction of Avicenna and Aquinas between being (that something exists) and essence (what something is like) (Bobik 1965). However, some differences are worth pointing out. In IIT (intrinsic) existence says that a system of elements has some cause-effect power (upon itself); information (essence) specifies all the particular cause-effect powers it has (upon itself). In other words, existence answers the question: does the system have cause-effect power? And information (essence) answers the question: exactly what kind of cause-effect powers does it have? Also, in IIT the final criterion for existence (both for experience at large and for any phenomenal distinction within an experience) is given by the maximum value of integrated information (\(\Phi^{\textrm{max}}\) or \(\varphi^{\textrm{max}}\), respectively). Since these values depend on the irreducibility of the full cause-effect structure and its cause-effect repertoires, the quantity of being (existence) depends on its quality (information or essence). Finally, by the exclusion postulate, what actually exists is only what lays the strongest claim to existence, by being maximally irreducible over elements or spatio-temporal grain. Hence, the exclusion postulate might be called the “maximum existence principle.” By the same principle, the existence of the system (\(\Phi^{\textrm{max}}\)) always trumps the existence of its subsets (\(\varphi^{\textrm{max}}\)): if there are alternative assignments of cause-effect repertoires (purviews) over subsets of elements within a complex, the winner is the assignment that supports the conceptual structure having \(\Phi^{\textrm{max}}\). [return]
  15. ^ A way to visualize the meaning of the axioms/postulates is to apply them to an everyday object, such as a light bulb. Existence: The light bulb has cause-effect power (albeit only extrinsically), since one can affect it (screw it in) and it can have effects (produce light). Composition: It is composed of multiple parts (screw base, glass bulb, filament, wire, stem, etc.), all of which have cause-effect power alone or in combination. Information: It is what it is, meaning it has the “form” of a light bulb, thereby differing from a large number of other objects (such as a fan, a chair, a table, a shoe, and so on). Integration: It cannot be subdivided without loss into causally non-interdependent parts (if you split it in two, it will not work). Exclusion: It has borders - it is neither less (just a filament) nor more (a chandelier) than what it is. Of course, while this analogy may be illuminating, it is also potentially misleading, since a light bulb exists extrinsically (it is an extrinsic “form” in space-time), whereas an experience exists intrinsically (it is an intrinsic “form” in cause-effect space). [return]
  16. ^ An important question is whether the postulates of IIT are both necessary and sufficient to specify a single, unique form in cause-effect space and whether this could be proven mathematically. That is, whether in this respect they constitute a complete, independent, and consistent set. Other (non-essential) properties of experience could then be related to particular features of certain forms. [return]
  17. ^ The identity between experiences and conceptual structures that are maximally irreducible intrinsically implies that the presence or absence of consciousness, its particular quality, and the similarities or differences between experiences, should map directly onto properties of the corresponding conceptual structures, but may be less intelligible based only on the state of the components. As an example, both coma and generalized seizures can be associated with a loss of consciousness. IIT predicts in both cases a similar breakdown of the conceptual structures normally generated by the awake brain. However, the state of the cerebral cortex is radically different in the two conditions—neuronal silence in coma and intense neuronal firing during the initial phase of a generalized seizure. [return]
  18. ^ In other words, that a complex of elements in a state “informs” the cause-effect space of possibilities (past-future states) in a particular way (specifies a conceptual structure), is a real (as opposed to virtual), actual (as opposed to potential), intrinsic property within its causal neighborhood (as opposed to relational), just as it is a real, actual, intrinsic property of a mass to bend space-time in a particular way. [return]
  19. ^ According to IIT, concepts can be low- and high-order, depending on how many elements are involved. First-order concepts can be positive and negative, depending on whether the elements are ON or OFF. Moreover, concepts can be low- and high-level, depending on where the relevant elements are in a hierarchy of stages that specify disjunctions (OR) of conjunctions (AND) over other elements. Disjunctions of conjunctions specify concepts—for example a “face”—that are invariant over transformations of location and size in the visual field. It is important to realize that, while we are used to summarizing what we see by referring to a few positive high-level concepts (“I see the letters M, A, N, Y”), we would not see what we see without the contribution of a large number of other concepts—positive and negative, low- and high-order, low- and high-level—that make the image what it is and different from countless others. [return]
  20. ^ The structure of experience appears to be much richer than what is reported explicitly (either in words or actions, Block 2005), although others insist that consciousness should only be identified with what is accessible explicitly (Cohen and Dennett 2011). IIT emphasizes that the set of elements (neurons) that specify a particular concept within a conceptual structure may be more or less difficult to access from within the complex. First-order concepts, specified by individual neurons, or higher-order concepts specified by several nearby neurons that receive shared inputs, should be easily accessible, say through back-connections converging to a single place. This is especially the case for high-level, categorical concepts such as faces, letters, and so on. By contrast, higher-order concepts that are widely distributed within the complex should be much harder to access. This is especially the case for low-level, spatial concepts that specify intricate spatial arrangements (say, those triggered by a painting by Jackson Pollock). In any case, the concepts that can be accessed and communicated to an external observer at any given time are a minimal subset of the entire set of concepts that compose the quale sensu lato - positive and negative, low-level and high-level, low-order and high-order. Nor can one easily communicate the relationships among them (distance in cause-effect space) that give each experience its particular meaning. [return]
  21. ^ There is a relationship between the number of concepts that make up a typical conceptual structure specified by a complex of n elements (up to \(2^n-1\) concepts, one per non-empty subset of elements) and the number and dissimilarity of different conceptual structures that can be supported by states of the same complex (up to \(2^n\) states). Hence, estimating the number of different states available to an integrated neural system (neurophysiological differentiation) should also provide an estimate of the number of concepts of a typical conceptual structure and thereby, indirectly, of \(\Phi\) (assuming that the number of concepts is proportional to the irreducibility of a conceptual structure). [return]
  22. ^ According to IIT, an experience is identical with a maximally irreducible conceptual structure specified over itself by a complex of elements in a state, and it exists at a discrete interval of time at which cause-effect power reaches a maximum. It is important to establish how this relates to the dynamics of the elements of the complex at a faster time scale and to the temporal evolution of the states of the complex (Barrett and Seth 2011). It should be noted, however, that a complex in a state may specify a conceptual structure even when it is dynamically at a fixed point. Also worth establishing is how sudden splits in conceptual structures—as when a single maximum of integrated information is supplanted by two—are reflected in the dynamics of a system. [return]
  23. ^ Note that the inferences listed below are indeed consequences of the theory, not premises. In other words, they are derived from the postulates, hence ultimately from the axioms that are meant to capture the essential properties of experience. A common temptation is to go beyond phenomenology (or even ignore it) and rely on some intuitions about the physical world, say that systems that are simple to describe (grids of logic gates) or are mere aggregates (rocks or crowds) should not generate consciousness, and then translate such intuitions into requirements for consciousness. [return]
  24. ^ Exclusion does not prevent the “nesting” of conscious entities over similar spatial locations, as long as there is no overlap of the cause-effect repertoires of the respective mechanisms. For example, a mitochondrion within a neuron may specify a small conceptual structure that does not causally overlap with that specified by the neuron itself together with the other neuronal elements of a complex. [return]
  25. ^ One can conceive of two identical physical systems, one with and one without consciousness—a philosophical zombie (Chalmers 1996). Indeed, a common initial reaction to IIT goes along similar lines: one can conceive of a system that satisfies all the postulates of IIT – having maximally irreducible cause-effect power upon itself – but is completely unconscious. After all, from an extrinsic perspective there does not seem to be a compelling reason why such a system would have to be conscious. Which is why it bears recalling that IIT does not start from an extrinsic perspective, by considering physical systems having certain properties, such as a sufficient degree of \(\Phi\), and then postulating that they might be conscious. Instead, IIT starts from the intrinsic perspective – from experience itself and its phenomenal properties, and from there it goes on to postulate the existence of a physical world of elements in a state having properties that can account for those of experience itself. Thus the phenomenon of experience, which is immediately and intrinsically real, comes first, rather than being postulated as an epiphenomenon of something real. Moreover, if the postulated identity between experiences and conceptual structures that are maximally irreducible intrinsically is true, a system of elements in a state that specifies such a conceptual structure has the corresponding experience necessarily and cannot be a zombie. [return]
  26. ^ Since macro-elements yielding the conceptual structure with highest \(\Phi\) may be realized by different micro-level constituents—for example, different isotopes that make no difference to the macro-level conceptual structure. [return]
  27. ^ Briefly, the explanation is that perturbing the system at a macro-level by imposing a uniform distribution on macro-states may yield a more deterministic or specific cause-effect structure than doing so at a micro-level: in other words, the system makes more of a difference to itself as a set of macro-elements. Of course, the uniform distribution of macro-states imposed for perturbations at the macro-level is equivalent to a particular distribution of micro-states, and vice-versa, but if the macro has more cause-effect power than the micro, one should say that the macro subsumes the micro, rather than supervening on it. [return]
  28. ^ It is intriguing to consider to what extent the physical world has intrinsic existence (cause-effect power from its own perspective—in and of itself) in addition to extrinsic existence (cause-effect power from the perspective of an observer who can perform interventions on it and sample the results). For example, if the world did not contain any elements with unidirectional causal links at the micro-physical level, some amount of consciousness would be pervasive. Another interesting question is whether ultimately all physical properties can be considered in terms of maximally irreducible cause-effect power of a specific kind, with no further “categorical” substrate.[return]
  29. ^ Of course, while the conceptual structure with highest \(\Phi\) must be inferred empirically by using observations, manipulations, and partitions on a system’s elements at multiple spatio-temporal scales, from its intrinsic perspective the system does not need to infer anything: it just exists ‘’as’’ the conceptual structure it specifies.[return]
  30. ^ Reduction to first-order elements also offers little in terms of understanding what exists extrinsically - what a system is, does, and why it does so. Understanding presupposes a conscious subject who knows not only what will happen next in a particular condition, but also what could happen in different conditions, and who can do so for all subsets of a system. For example, to understand which function is performed by a certain neuron in the brain, one needs to know for which equivalence class of stimuli the neuron would turn on (say, a face irrespective of where it is located in the visual field). And to understand whether some additional function is performed by subsets of multiple elements, one needs to know whether their joint equivalence class is irreducible to their individual ones. Note that irreducibility must be assessed by a “physical” partition of a system (“noising” connections), not by a “mathematical” factorization (showing that the joint cause repertoire is obtained by the product of the two separate cause repertoires). [return]
  31. ^ On the other hand, a “gerrymandered” aggregate of atoms (a haphazard collection of non-interacting elements) does not exist either intrinsically or extrinsically, because it has no irreducible cause-effect power. [return]
  32. ^ The probability distributions which form the cause-effect repertoire are determined by a perturbational analysis of the system. To assess the cause-effect power of a mechanism, a systematic intervention is used to set the system into all possible states, similar to (Pearl 2000). However, there are some differences with respect to Pearl’s interventional calculus. For example, perturbations are not restricted to directed acyclic graphs (which would not be integrated). Moreover, perturbations on elements of the system outside the purview of a higher-order mechanism, which may provide common input to its components, are performed independently on each output (essentially splitting the common input into independent virtual elements) (Oizumi, Albantakis et al. 2014). [return]


  • Albantakis, L., A. Hintze, C. Koch, C. Adami and G. Tononi (2014). “Evolution of integrated causal structures in animats exposed to environments of increasing complexity.” PLoS Comput Biol 10: e1003966.
  • Baars, B. (1988). A Cognitive Theory of Consciousness, Cambridge University Press.
  • Balduzzi, D. and G. Tononi (2008). “Integrated information in discrete dynamical systems: motivation and theoretical framework.” PLoS Comput Biol 4: e1000091.
  • Barrett, A.B. and A.K. Seth (2011). “Practical measures of integrated information for time-series data.” PLoS Comput Biol 7(1): e1001052.
  • Barrett, A.B. (2014). “An integration of integrated information theory with fundamental physics.” Front. Psychol. 5(63).
  • Block, N (2005). “Two neural correlates of consciousness.” Trends Cogn Sci 9(2): 46-52.
  • Bobik, J. T. (1965). Aquinas on being and essence: a translation and interpretation. Notre Dame, Ind., University of Notre Dame Press.
  • Casali, A. G., O. Gosseries, M. Rosanova, M. Boly, S. Sarasso, K. R. Casali, S. Casarotto, M.-A. Bruno, S. Laureys, G. Tononi and M. Massimini (2013). “A theoretically based index of consciousness independent of sensory processing and behavior.” Sci Transl Med 5: 198ra105.
  • Chalmers, D. J. (1996). The conscious mind: in search of a fundamental theory. New York, Oxford University Press.
  • Cohen, M. A. and D. C. Dennett (2011). “Consciousness cannot be separated from function.” Trends Cogn Sci 15(8): 358–364.
  • Dehaene, S. and J.-P. Changeux (2011). “Experimental and theoretical approaches to conscious processing.” Neuron 70(2): 200–227.
  • Ferrarelli, F., M. Massimini, S. Sarasso, A. Casali, B. A. Riedner, G. Angelini, G. Tononi and R. A. Pearce (2010). “Breakdown in cortical effective connectivity during midazolam-induced loss of consciousness.” Proc Natl Acad Sci U S A.
  • Hoel, E. P., L. Albantakis and G. Tononi (2013). “Quantifying causal emergence shows that macro can beat micro.” Proc Natl Acad Sci U S A 110(49): 19790–19795.
  • Joshi, N. J., G. Tononi and C. Koch (2013). “The minimal complexity of adapting agents increases with fitness.” PLoS Comput Biol 9(7): e1003111.
  • Kim, J. (2010). Philosophy of Mind, Westview Press.
  • Leibniz, G. W. (1988). Discours de métaphysique et Correspondance avec Arnaud. Paris, Vrin.
  • Levine, J. (1983). “Materialism and qualia: The explanatory gap.” Pacific philosophical quarterly 64(4): 354–361.
  • Lipton, P. (2004). Inference to the Best Explanation, Routledge/Taylor and Francis Group.
  • Mach, E. (1959). The Analysis of Sensations and the Relation of the Physical to the Psychical: Translated from the 1st German Ed. by CM Williams. Rev. and Supplemented from the 5th German Ed.[1906] by Sydney Waterlow. With a New Introd. by Thomas S. Szasz, Dover Publications.
  • Massimini, M., F. Ferrarelli, R. Huber, S. K. Esser, H. Singh and G. Tononi (2005). “Breakdown of cortical effective connectivity during sleep.” Science 309: 2228–2232.
  • Massimini, M., F. Ferrarelli, M. J. Murphy, R. Huber, B. A. Riedner, S. Casarotto and G. Tononi (2010). “Cortical reactivity and effective connectivity during REM sleep in humans.” Cognitive Neuroscience.
  • Mudrik, L., N. Faivre and C. Koch (2014). “Information integration without awareness.” Trends Cogn Sci.
  • Oizumi, M., L. Albantakis and G. Tononi (2014). “From the phenomenology to the mechanisms of consciousness: integrated information theory 3.0.” PLoS Comput Biol 10(5): e1003588.
  • Pearl, J. (2000). Causality: models, reasoning and inference (Vol. 29). Cambridge: MIT Press.
  • Solomonoff, R. J. (1964). “A formal theory of inductive inference. Part I and II.” Information and Control 7(1–2): 1–22; 224–254.
  • Sullivan, P. R. (1995). “Contentless consciousness and information-processing theories of mind.” Philosophy, Psychiatry, & Psychology 2(1): 51–59.
  • Tononi, G. (2008). “Consciousness as integrated information: a provisional manifesto.” Biol Bull 215: 216–242.
  • Tononi, G. (2012). “The Integrated Information Theory of Consciousness: An Updated Account.” Arch Ital Biol.
  • Tononi, G. (2013). On the irreducibility of consciousness and its relevance to free will. Is Science Compatible with Free Will?
  • Tononi, G. and C. Koch (2008). “The neural correlates of consciousness: an update.” Ann N Y Acad Sci 1124: 239–261.
  • Tononi, G. and C. Koch (2014) “Consciousness: Here, There, but not Everywhere.” arXiv:1405.7089.
  • Tononi, G., O. Sporns and G. M. Edelman (1999). “Measures of degeneracy and redundancy in biological networks.” Proc Natl Acad Sci U S A 96: 3257–3262.
  • Treisman, A. M. and G. Gelade (1980). “A Feature-Integration Theory of Attention.” Cognitive Psychology 12(1):97-136.
  • von Arx, S. W., R. M. Muri, D. Heinemann, C. W. Hess and T. Nyffeler (2010). “Anosognosia for cerebral achromatopsia—a longitudinal case study.” Neuropsychologia 48(4): 970–977.