Talk:Continuous attractor network

    Reviewer B Comments

    This is a nice review of a concept that is becoming more and more important in computational neuroscience. Here are just a few comments; the last three are more important, the rest are very minor.

    1) it would be necessary to explain which experimental findings actually suggest that grid cells are a better candidate than CA3 for an underlying CANN mechanism. Maybe a mention of the alternative models to explain grid cells (The Barry/Burgess/O'Keefe/Hasselmo model) here?

    2) the graphic in Figure 3 actually comes originally from McNaughton et al., Nat Rev Neurosci 2006; Witter and Moser 2006 have a reprint of it.

    3) "CANN model is typically defined by (i) selecting a manifold as a base for the CANN": it would be clearer to the readership if you could give examples to explain what you mean by manifold. See if you like my edits of the paragraph

    4) "From a practical point of view, the merging occurs at a finite N (the value depends on the model), when perturbation thresholds separating individual attractors become small compared to intrinsic noise.": This is a bit more complicated than this. For finite N, small random fluctuations in the synaptic matrix (for example if the units are not arranged in a regular lattice on the manifold), may collapse the continuous attractor into only a small number of stable configurations (see e.g. the work of Tsodyks and Sejnowski Int. J. Neural Systems 1994), also the related work of XJ Wang on continuous attractor nets with spiking neurons

    5) I was surprised (given that the author was one of the initial proponents of the concept) that no mention is made of the "hidden layer" mechanisms to perform path integration with a continuous attractor.

    6) The review is very hippocampo-centric, but continuous attractors have been used to model other systems in neuroscience. For example the oculomotor system (work by Tank, Seung, etc.) and working memory in the prefrontal cortex (Wang, Gutkin, many others)

    Author's Response

    > This is a nice review of a concept that is becoming more and more important in computational neuroscience.

    Thank you!

    > Here are just a few comments; the last three are more important, the rest are very minor.

    > 1) it would be necessary to explain which experimental findings actually suggest that grid cells are a better candidate than CA3 for an underlying CANN mechanism. Maybe a mention of the alternative models to explain grid cells (The Barry/Burgess/O'Keefe/Hasselmo model) here?

    Here are my reasons: (i) There is no experimental proof that CA3 is an attractor network, and a possibility exists that it is entirely driven by its input. If so, all the CA3 place-cell-related phenomena can be understood based on the properties of grid cells; this appears to be a parsimonious explanation, again because of the lack of experimental studies of intrinsic attractor dynamics in CA3. (ii) In MEC you see a persistent, regular grid of place fields that is coherent over a long behavioral range within each anatomically local population of grid cells (an MEC module), and whose existence does not depend on sensory input. As far as I am aware, this phenomenon is not observed anywhere else in the brain. Moreover, it appears to be autonomous within each half-millimeter MEC module (grids in different modules re-align independently of each other across environments; only the relative orientation of the grids is preserved, apparently because they all receive the same head direction input). Therefore, it is not only likely that this phenomenon has a local origin: there seems to be no possible alternative explanation, and this explanation is consistent with the available anatomical and neurophysiological data about MEC. Furthermore, place-related hippocampal phenomena, such as phase precession, originate in MEC. I included some of this line of reasoning in abbreviated form in the article, although this is not its central topic: it is the topic of the article http://www.scholarpedia.org/article/Grid_cells, to which I have linked from my article.

    Regarding the Barry/Burgess/O'Keefe/Hasselmo models, I personally do not believe in this explanation. Of course, this would not be sufficient to say in response to a review. In short, I believe that the model assumptions and predictions of those oscillatory models are not consistent with all reliably documented experimental data about grid cells. Most importantly, equations that predict grid firing based on interference of oscillations, with path integration mediated by a speed-to-frequency conversion modulated by the head direction system, are just too brittle to account for the robustness of the grid phenomenon. Thank you, however, for the comment: I included references to these works in the revised article.
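
    For context, here is a schematic paraphrase of the oscillatory-interference mechanism referred to above (my own summary for this talk page, not the authors' exact equations). A velocity-controlled oscillator is assigned the frequency

        f_i(t) = f_0 + \beta \, s(t) \cos\big(\phi(t) - \phi_i\big),

    where s(t) is running speed, \phi(t) is heading, and \phi_i is a preferred direction. Its phase relative to a baseline oscillation of frequency f_0 then grows as

        \Delta\psi_i(t) = 2\pi\beta \int_0^t s(\tau) \cos\big(\phi(\tau) - \phi_i\big)\, d\tau,

    i.e., as 2\pi\beta times the displacement along \phi_i, so the interference envelope forms spatial bands of wavelength 1/\beta; combining bands whose preferred directions are 60 degrees apart yields a hexagonal grid. Any error in s, \phi, or the oscillator frequencies accumulates in the integrated phase, which is the brittleness concern raised above.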

    > 2) the graphic in Figure 3 actually comes originally from McNaughton et al., Nat Rev Neurosci 2006; Witter and Moser 2006 have a reprint of it.

    Thanks, I corrected this.

    > 3) "CANN model is typically defined by (i) selecting a manifold as a base for the CANN": it would be clearer to the readership if you could give examples to explain what you mean by manifold. See if you like my edits of the paragraph

    Thanks, I included a link to the Wikipedia article on manifold (I think, however, that Scholarpedia should have a separate article on manifold too) and kept your edits after minor sub-editing.

    > 4) "From a practical point of view, the merging occurs at a finite N (the value depends on the model), when perturbation thresholds separating individual attractors become small compared to intrinsic noise.": This is a bit more complicated than this. For finite N, small random fluctuations in the synaptic matrix (for example if the units are not arranged in a regular lattice on the manifold), may collapse the continuous attractor into only a small number of stable configurations (see e.g. the work of Tsodyks and Sejnowski Int. J. Neural Systems 1994), also the related work of XJ Wang on continuous attractor nets with spiking neurons

    Thanks for the reference to XJ Wang. I think, however, that your version of the sentence misses the point. I can change my wording, but I would probably disagree that a finite neural network based on any of the well-known NN models can have a continuous attractor (understood as a continuum of points of irrelevant equilibrium) in a rigorous mathematical sense. Of course, it can have extended attractors such as limit cycles or chaotic trajectories, but these are not the attractors that are usually associated with the term "continuous attractor" in the biological literature. A finite neural network would only have a discrete set of stable points or limit cycles. However, in the infinite-N limit, these can transform into a continuum of points of irrelevant equilibrium; please recall your own work, or look in my dissertation. I should also point out that in this case it does not matter whether the arrangement of points on the manifold is regular or random, unless the inhomogeneity is significant and survives in the infinite-N limit.
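
    To illustrate this point, here is a minimal numerical sketch (entirely my own toy construction for this talk page; the network, all parameter values and all names are arbitrary choices, not taken from the article or from the papers cited above). A rate-based ring network with translation-invariant weights holds an activity bump at essentially any position, so at finite N the stable configurations form a fine discrete set that approaches a continuum as N grows; adding quenched random jitter to the synaptic matrix typically makes the bumps drift and collapse onto a much smaller set of discrete stable states, in the spirit of the Tsodyks and Sejnowski result mentioned above.

        import numpy as np

        N = 128
        theta = 2 * np.pi * np.arange(N) / N              # preferred positions on the ring

        def weights(jitter=0.0, seed=0):
            """Translation-invariant cosine connectivity plus optional quenched noise."""
            rng = np.random.default_rng(seed)
            d = theta[:, None] - theta[None, :]
            W = 10.0 * (np.cos(d) - 0.3) / N              # strengths chosen (arbitrarily) to sustain a bump
            return W + jitter * rng.standard_normal((N, N)) / N

        def run(W, r0, steps=6000, dt=0.1):
            """Euler-integrate dr/dt = -r + tanh([W r]_+) (tau = 1) and return the final rates."""
            r = r0.copy()
            for _ in range(steps):
                r += dt * (-r + np.tanh(np.maximum(W @ r, 0.0)))
            return r

        def bump_position(r):
            """Population-vector estimate of the bump location on the ring."""
            return np.angle(np.sum(r * np.exp(1j * theta)))

        for jitter in (0.0, 1.0):
            W = weights(jitter)
            finals = [bump_position(run(W, np.maximum(np.cos(theta - c), 0.0)))
                      for c in np.linspace(0.0, 2 * np.pi, 24, endpoint=False)]
            distinct = len(set(np.round(finals, 2)))      # rough count of settled positions
            print(f"jitter={jitter}: roughly {distinct} distinct settled bump positions")

        # Without jitter the bump stays close to wherever it was seeded, so the set of
        # stable configurations is quasi-continuous and grows with N; with quenched
        # jitter the bumps typically drift and collapse onto a handful of discrete
        # stable states (how many depends on N, the noise amplitude and the run time).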

    > 5) I was surprised (given that the author was one of the initial proponents of the concept) that no mention is made of the "hidden layer" mechanisms to perform path integration with a continuous attractor.

    Thank you for pointing this out; I will correct it, although path integration is not the main topic here.
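
    For readers unfamiliar with the "hidden layer" idea, here is a toy sketch (again entirely my own construction, with arbitrary names and parameters; it illustrates only the generic scheme, not any particular published model): conjunctive hidden units are gated by a velocity signal and project back onto the ring through connections offset by a small angle, so the activity packet is pushed along the manifold at a speed roughly proportional to the velocity input.

        import numpy as np

        N = 128
        theta = 2 * np.pi * np.arange(N) / N
        d = theta[:, None] - theta[None, :]

        W_rec = 10.0 * (np.cos(d) - 0.3) / N                 # symmetric recurrent weights on the ring
        delta = 2 * np.pi / N                                 # angular offset of the return projections
        W_fwd = np.maximum(np.cos(d) - 0.9, 0.0)              # ring -> hidden, narrow and topographic
        W_back_plus = 10.0 * (np.cos(d - delta) - 0.3) / N    # hidden(+) -> ring, shifted by +delta
        W_back_minus = 10.0 * (np.cos(d + delta) - 0.3) / N   # hidden(-) -> ring, shifted by -delta

        def bump_position(r):
            return np.angle(np.sum(r * np.exp(1j * theta)))

        def step(r, v, dt=0.1):
            """One Euler step; v is a signed velocity signal gating the two hidden layers."""
            h_plus = max(+v, 0.0) * (W_fwd @ r)               # active only for positive velocity
            h_minus = max(-v, 0.0) * (W_fwd @ r)              # active only for negative velocity
            inp = W_rec @ r + W_back_plus @ h_plus + W_back_minus @ h_minus
            return r + dt * (-r + np.tanh(np.maximum(inp, 0.0)))

        r = np.maximum(np.cos(theta), 0.0)                    # seed a bump at theta = 0
        for _ in range(600):                                  # let it settle with zero velocity
            r = step(r, 0.0)
        positions = []
        for _ in range(1500):                                 # then feed a constant velocity signal
            r = step(r, 0.3)
            positions.append(bump_position(r))
        unwrapped = np.unwrap(positions)
        print(f"bump displaced by about {unwrapped[-1] - unwrapped[0]:+.2f} rad while v > 0")

        # The bump should move steadily in the +theta direction at a speed roughly
        # proportional to v, i.e. the network integrates the velocity signal; the
        # exact gain depends on the arbitrary parameter choices above.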

    > 6) The review is very hippocampo-centric, but continuous attractors have been used to model other systems in neuroscience. For example the oculomotor system (work by Tank, Seung, etc.) and working memory in the prefrontal cortex (Wang, Gutkin, many others)

    Thank you very much. I will cite these works.

    Reviewer A's and Reviewer C's Comments and Author's Responses

    Reviewers' comments originally posted in the article were moved here by the author (pasted in red as reviewers' comments). Not all of the reviewers' comments have been addressed yet.

    • Section "The notion of a continuous attractor"

    <review>Reviewer's comment: "While, e.g., an attracting periodic orbit is a continuous attractor in a trivial sense, the term "continuous attractor" usually is not associated with these examples" -- this isn't accurate; these are classic cases of continuous attractors</review>

    <review>Reviewer's comment: I agree with the first reviewer. A periodic orbit is not what we usually consider as a continuous attractor. The points on a periodic orbit do form a continuum, but each point is not stable individually and therefore the whole continuum is not a continuous attractor. </review>

    Author's response: The two reviewers actually contradict each other at a very obvious level (therefore I cannot satisfy both), while the second reviewer claims that both reviewers agree. Here are the details. The first reviewer says "these are classic cases of continuous attractors", referring to a periodic orbit, etc. The second reviewer then says: "The points on a periodic orbit do form a continuum, but each point is not stable individually and therefore the whole continuum is not a continuous attractor", thereby saying the opposite of what the first reviewer said, in contradiction with the beginning of the comment.

    This second statement also implicitly suggests that a continuous attractor is a continuum of points of stable equilibrium. This view is not consistent with the traditional understanding of the notions of a continuous attractor and of an attractor in general. Here is why. According to the standard definition of an attractor (e.g., Strogatz 1994, p. 324), every point of stable equilibrium in the phase space is an attractor on its own, and is not a point of a bigger attractor; therefore, it is not a point of a continuous attractor. If there were a continuum of points each corresponding to a stable equilibrium (see my example in the edited version of the article), then we would have to regard them as a continuum of attractors, not one attractor, and therefore we would not call them together "a continuous attractor". In contrast, points of a continuous attractor can be points of irrelevant equilibrium, but they cannot be points of stable equilibrium (there is a mathematical difference between the two notions; a simple example is sketched at the end of this subsection).

    Returning to the first reviewer's comment, I hope that I addressed and basically resolved the issue in the revised version, accepting the comment. If the other reviewer believes that the term "continuous attractor" should refer to the narrow class only, then I ask the reviewer to rewrite the comment coherently. I also resolved the following issue.

    <review>Reviewer: The author should define what "minimal" means. This is an encyclopedia.</review>
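
    To make the distinction above concrete, here is a simple textbook-style example (my own illustration, not a quotation from the article). Consider, in polar coordinates,

        \dot{r} = r(1 - r), \qquad \dot{\theta} = 0 .

    Every point of the unit circle r = 1 is an equilibrium: perturbations in r decay, while perturbations along \theta neither grow nor decay (an "irrelevant", i.e., neutral, direction). No single point of the circle attracts an open neighborhood of itself, so no point is an attractor on its own; the circle as a whole attracts every trajectory with r > 0 and contains no proper closed invariant subset that attracts an open neighborhood of itself, so by the standard definition the attractor is the circle itself, a continuous (ring) attractor. If instead \dot{\theta} = \omega \neq 0, the same circle becomes an attracting periodic orbit whose points are not equilibria at all, which is the borderline case discussed in the comments above.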

    Section "Charts, activity packets and CANN models"

    (jumping to the bottom of this section)

    While the term "chart" is not widely accepted (sometimes the word "frame" is used instead) <review>Reviewer's comment: yes, for this reason, I think you should stick to the standard word, 'manifold', in the article... you can mention these are called charts or frames once, perhaps, but standard terminology is more appropriate for such an article</review>

    Author's response: The term "chart" here does not replace the term "manifold" but stands for something different: it actually refers to the embedding of the neuronal units in the manifold. "Chart" is also a standard term in the theory of manifolds, where it has a somewhat related meaning, distinct from the notion of a manifold.
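
    As a terminological note (my own summary of standard usage, not text from the article): in differential geometry, a chart on an n-dimensional manifold M is a pair (U, \varphi) with U \subset M open and \varphi : U \to \mathbb{R}^n a homeomorphism onto its image, and an atlas is a collection of charts covering M. In the CANN literature the word is used in the related but distinct sense described above: a chart is an assignment i \mapsto x_i \in M of a location on the base manifold to each neuronal unit i, so that one and the same network can, in principle, support several different charts.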

    (to be continued..)
