Talk:Attractor network


    Major comments: 1) This article could use a more general introductory paragraph that addresses the computational neuroscience elements of attractor networks and is less formal mathematically. Below is an example from an article in the Encyclopedia of Neuroscience by XJ Wang (in press): "The term ‘attractor’, when applied to neural circuits, refers to dynamical states of neural populations that are self-sustained and stable against perturbations. It is part of the vocabulary for describing neurons or neural networks as dynamical systems. This concept helps to quantitatively describe self-organized spatiotemporal neuronal firing patterns in a circuit, during spontaneous activity or underlying brain functions. Moreover, the theory of dynamical systems provides tools to examine the stability and robustness of a neural circuit’s behavior, proposes a theory of learning and memory in terms of the formation of multiple attractor states or continuous attractors, and sheds insights into how variations in cellular/synaptic properties give rise to a diversity of computational capabilities." [auth: done]

    2) It might help to show an energy landscape and/or a 2-D set of trajectories converging on multiple fixed points to illustrate the Hopfield-type point attractor models for memory. This could also help to illustrate the use of the attractor: how corrupted versions of the memory are error-corrected via the dynamics of flow toward the idealized memory represented by the attractor. [auth: done]
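
    For concreteness, here is a minimal NumPy sketch of that error-correction story (network size, pattern count, and noise level are arbitrary illustrative choices, not taken from the article):

        import numpy as np

        rng = np.random.default_rng(0)

        # Store P random +/-1 patterns in an N-unit Hopfield network (Hebbian rule).
        N, P = 100, 5
        patterns = rng.choice([-1, 1], size=(P, N))
        W = (patterns.T @ patterns) / N
        np.fill_diagonal(W, 0)                      # no self-connections

        # Corrupt a stored pattern by flipping 10% of its units.
        state = patterns[0].copy()
        flip = rng.choice(N, size=N // 10, replace=False)
        state[flip] *= -1

        # Asynchronous updates flow "downhill" in energy toward the nearest
        # stored pattern -- the point attractor error-corrects the input.
        for _ in range(5 * N):
            i = rng.integers(N)
            state[i] = 1 if W[i] @ state >= 0 else -1

        print((state @ patterns[0]) / N)            # overlap ~ 1.0: memory recovered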

    Smaller comments: 1) You might replace/define "nodes" in the opening paragraph, as the biological network audience will think of these as "neurons". Perhaps say "'elements' ('neurons' in biological neural networks or 'nodes' in artificial neural networks)." [a: done]

    2) I found the Terminological note distracting and not useful. Its treatment of the distinction between line attractors and neural integrators is also not entirely correct (see below), which makes it even less useful. I think removing this here, defining "network state space" when first needed, and replacing "attractor space" by "attractor" would suffice.

    3) I think you mean k < 0 for the isolated fixed point attractor equation. You should refer to this as a "trivial" solution rather than "degenerate". Note that systems of this type are commonly used in networks that follow an input (e.g. when the equation is driven by some function, it represents a characteristic time scale for the approach to the steady-state driving input) -- thus "limited interest" might be too strong. The equation as written (undriven) also represents the state of being spontaneously inactive, which describes many undriven systems. [a: noted/updated]
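
    As a sketch of that driven case (the gain and input here are arbitrary choices): dx/dt = k(x - I(t)) with k < 0 relaxes toward the driving input with time constant 1/|k|, and decays to the inactive state when the drive is removed:

        import numpy as np

        k, dt = -2.0, 0.001                  # k < 0: stable fixed point, tau = 0.5
        t = np.arange(0.0, 3.0, dt)
        I = np.where(t < 1.5, 1.0, 0.0)      # step input, switched off at t = 1.5

        x, trace = 0.0, []
        for It in I:
            x += dt * k * (x - It)           # Euler step toward the driving input
            trace.append(x)

        # x climbs toward 1 with time constant 0.5 while driven, then decays
        # back toward 0 -- the "spontaneously inactive" undriven state.
        print(max(trace), trace[-1])         # ~0.95, ~0.05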

    4) For the line, ring, & plane attractors, perhaps rather than having just the manifold, it would be worth continuing the theme of the Point attractors section and giving a 2-D set of equations with a zero eigenvalue. Most simply, you could do:

    dy/dt = -y,  dx/dt = 0  (i.e. the k = 0 case of the example above)

    to illustrate a system that has the line y=0 as an attractor in the x-y plane. (Or you could do a 3-D case corresponding more closely to the nice Figure 2 illustration). [a: done]
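
    A quick numerical check of that suggestion (initial conditions chosen at random):

        import numpy as np

        # dx/dt = 0, dy/dt = -y: every point on the line y = 0 is a fixed point.
        rng = np.random.default_rng(1)
        pts = rng.uniform(-1, 1, size=(10, 2))    # 10 random (x, y) starting points
        dt = 0.01
        for _ in range(1000):
            pts[:, 1] += dt * (-pts[:, 1])        # y decays; x never moves

        # Trajectories collapse vertically onto y = 0, each keeping its own x:
        # a continuum of fixed points rather than a single isolated one.
        print(np.abs(pts[:, 1]).max())            # ~0: all on the line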

    5) On line attractors vs. neural integrators: line attractors refer to a 1-D continuous set of fixed points. Neural integrators refer to systems that integrate (in the sense of Calculus, as shown in the article) an input. A system whose dynamics form a line attractor may not integrate its inputs--whether it does so depends on the form of the inputs, i.e. whether the inputs are arranged such that the distance traveled along the line attractor is proportional to the input. [Analogously, real biological neural integrators are typically not perfect line attractors, as they exhibit a slow decay of activity: thus, strictly speaking they are fixed point attractors that behave approximately like line attractors because of very slow dynamics along 1 dimension.] [a: I've updated the discussion to reflect these kinds of concerns, noting both the approximation and the specific class of line attractors relevant for integration]
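
    Both halves of this distinction can be made concrete in a few lines, continuing the system above (all constants here are arbitrary):

        import numpy as np

        dt, steps, u = 0.01, 2000, 0.5       # constant input u, 20 time units total

        # Input ALONG the line (x-direction) is integrated; input along the
        # decaying direction (y) is merely tracked, not accumulated.
        x = y = 0.0
        for _ in range(steps):
            x += dt * u                      # perfect integrator: x = integral of u
            y += dt * (-y + u)               # relaxes to u and stays there
        print(x, y)                          # ~10.0 (grows with time), ~0.5 (saturates)

        # A leaky "approximate" line attractor, as in real neural integrators:
        # a small decay makes it, strictly, a fixed-point attractor with very
        # slow dynamics along one direction; it forgets on a 1/eps timescale.
        eps, x = 0.01, 0.0
        for _ in range(steps):
            x += dt * (-eps * x + u)
        print(x)                             # ~9.1: falls short of the perfect integral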

    6) For ring attractors, the bump is still typically interpreted as representing a single value (typically the one at its peak). The key difference from the oculomotor integrator is in the type of encoding: the oculomotor integrator might be described as using a "rate code", in the sense that the neuronal firing rates in a single population increase monotonically with the encoded variable (i.e. rate changes monotonically with the encoded eye position). In the bump attractors, a "location code" is used, meaning that the location about which the population activity peaks represents the encoded variable (e.g. the head direction). [a: re-written to be clearer. I didn't adopt the suggested terminology because 'rate code' could be confused with the rate/timing debate.]
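
    A minimal rate-model "bump" sketch of this location code (connectivity and input strengths are illustrative choices, not from the article):

        import numpy as np

        N = 100
        theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
        # Translation-invariant connectivity on the ring: local excitation
        # (cosine term) plus global inhibition (constant negative term).
        W = (-1.0 + 3.0 * np.cos(theta[:, None] - theta[None, :])) / N

        rng = np.random.default_rng(2)
        r = rng.uniform(0.0, 0.1, N)         # small random initial activity
        dt = 0.1
        for _ in range(500):
            r += dt * (-r + np.maximum(0.0, W @ r + 1.0))   # rectified rate dynamics

        # The uniform state is unstable; activity settles into a bump whose
        # peak LOCATION (not an overall rate) is the encoded variable.
        print(theta[np.argmax(r)])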

    7) Figure 5 caption: What is meant by saying that the bump may be represented by "A POINT" on a plane attractor? Are you referring to the point at the peak of the bump? Referring to the main text on plane attractors (2nd paragraph), doesn't a function define a set of points rather than "a point"? [a: it is 'a point' because that point is the vector of coefficients, in a basis of the function space, that defines the function in the lower-dimensional space. Hopefully this is clearer now.]
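
    A small sketch of the coefficients-as-a-point idea (using a 1-D ring of bumps rather than a plane, and a hand-picked Fourier basis, purely for brevity):

        import numpy as np

        N = 200
        x = np.linspace(0, 2 * np.pi, N, endpoint=False)

        def coeffs(f):
            # Project f onto the basis {1, cos x, sin x}: the whole function
            # is summarized by one point in 3-dimensional coefficient space.
            return np.array([f.mean(),
                             2 * (f * np.cos(x)).mean(),
                             2 * (f * np.sin(x)).mean()])

        print(coeffs(np.exp(np.cos(x - 1.0))))   # a bump peaked at 1.0, as one point
        print(coeffs(np.exp(np.cos(x - 2.0))))   # translating the bump moves the point
        # The set of all such points, one per bump position, traces out the
        # low-dimensional attractor manifold.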

    Missing capacity

    The article offers a valuable introduction to the types of attractors found in attractor neural networks, but in my view it does not say much about the specific neural network features that set them apart from generic dynamical systems with attractor dynamics.

    One suggestion is to describe the Hopfield model at greater length, rather than relying on the few equations in the article, which inevitably focus a reader’s attention yet only provide examples of simple dynamical systems that are not really relevant to understanding neural networks. [a: note that the Hopfield network has its own entry in Scholarpedia already, so I didn't want to repeat that material. I've discussed it a bit more, and the relevant links will now show up]

    In the first part, the article fails to mention the central issue of the storage capacity of attractor networks, which has generated a substantial body of analytical work by theoretical neuroscientists using statistical physics techniques.
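
    For orientation, the headline results of those analyses (standard statistical-physics values, quoted here for context rather than taken from the article) are, in LaTeX form:

        p_{\max} \approx 0.138\,N \quad \text{(Hebbian storage of unbiased random patterns)},
        \qquad
        p_{\max} \propto \frac{N}{a\,|\ln a|} \quad \text{(sparse patterns with coding level } a \ll 1\text{)}.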

    Further, in the “biological interpretation” part, although line attractor networks for oculomotor control and head-direction ring attractor networks provide useful pedagogical examples of attractor networks in which storage capacity is not an issue, many more readers would be interested in cortical and hippocampal networks, where attractors are conceived as representing long-term memories, and storage capacity is “the” issue. The article could discuss: [a: I've included a number of these suggestions, but would note that the attractors I mention (e.g. plane attractors) have been used to support cortical and hippocampal models, hence I haven't introduced a lengthy discussion of storage -- which is also more relevant to the specific Hopfield discussion. However, I have highlighted storage more than before.]

    - analyses of storage capacity for uncorrelated memories, with a reference to Elizabeth Gardner’s approach and to Tsodyks and Feigel’man’s key (1988) result for sparse coding.

    - uncorrelated versus correlated memories in light of the episodic/semantic characterization of memory systems, and the issue of the “learning rule”.

    - the multiple chart model of Samsonovich and McNaughton (1997) and the analysis of its storage capacity, as a model relevant to hippocampal networks and as a non-trivial example of the coexistence of multiple quasi-continuous plane attractors.

    - models of local cortical attractors exhibiting a continuous representation for position and a discrete one for identity, as in Treves (2003), and the analysis of their storage capacity.

    - models for the storage of temporal sequences, and plane attractors with non-trivial temporal dynamics, as used to model hippocampal phase precession.

    Finally, the article should mention recent experimental evidence in support of attractor behaviour, e.g. the Wills et al. (2005) paper.

    Hopfield Nets Hyperlink

    The text referring to the Hopfield Network should be turned into a hyperlink to the Hopfield Networks article. [a: done]

    Comments by Alexei Samsonovich

    In my opinion, this article needs serious improvement. I cannot provide a comprehensive review now, because I see too many problems with this article, and addressing all of them would require a substantial amount of time (therefore, I would rather not become a co-author of the article). The article suffers from a lack of mathematical accuracy and does not address the topic of attractor neural networks fully (by the way, is it supposed to be about attractor NEURAL networks? this is not obvious from the title or from the content of the article). I would probably disagree that the terms "line attractor" and "neural integrator" are synonyms. I would definitely disagree that any network with persistent activity and no input is "acting as an attractor": it appears that the notion of an attractor is misunderstood by the author, at least in this case. Another example of a problem is that many key classic sources in the attractor neural network literature are not mentioned (e.g., Hertz J, Krogh A, Palmer RG, 1991. Introduction to the theory of neural computation. Addison-Wesley: Redwood City, CA), and the writing seems orthogonal to them.

    [auth: This review is not helpful, as it is not specific. I have made two small changes which address the stated concerns: 1. I included the words 'neural network' in the short definition; 2. The article explicitly states that neural integrators are a *subclass* of line attractors, so the 'synonym' concern is misplaced. However, it was not explicit in that discussion that the line must be stable to small perturbations -- it is now. There is no hope of citing all or even most of the 'classic' works on this topic, as there is a huge number. I have cited the earliest influential work (Amit, 1989) and other works directly related to the discussion.]
