I have added "a puff of air to the eye" (rather than an "electric shock") as an example of an unconditioned stimulus. I have not added the words "that elicits a response" because that is the defining property of "unconditioned stimulus."
I have inserted the comment that rewards are usually taken to be psychologically hedonic while reinforcers need not be.
I have a problem with the reviewer's third suggestion. While I am not sympathetic to such attempts, it is, as I originally stated, "attempts to develop animal models for the subjective correlates of motivation," not "studies in animals of the brain mechanisms of motivation and reinforcement," that are the subject of most of the last paragraph. Thus I reworded the sentence to stress the direct nature of the human studies: "For example, studies in humans of the subjective correlates of motivation and reinforcement and attempts to model subjective states in animals have led to..."
I have made a few changes in the text, but I do believe a more substantial change is necessary concerning Skinner's views. It is not really correct to say "For Skinner there is no relationship to be strengthened; there is no stimulus to participate in an association. There is only the operant, tied only probabilistically, not causally, to any antecedent event with which it might be associated." Yes, BFS assumed probabilistic rather than direct causation, but no, he did not exclude stimuli. On the contrary, he always defined the operant in terms of what he called (vaguely, it must be admitted) a "three-term contingency": a response is reinforced in the presence of a stimulus. The stimulus is termed "controlling" (some contradiction with the probabilistic idea? Yes, absolutely) and is called a "discriminative stimulus." The operant comprises all three elements, not just the response. Just what is "strengthened" by reinforcement is (intentionally, I suspect) left undefined in Skinner's system.
"Thus in the Skinnerian framework, it is the association between a response and its outcome that is learned and 'reinforced.'" Again, to be fair to Skinner, he studiously avoided the "what is learned?" question much studied by Hull and others, so this is also not quite right. BFS never addressed this issue, preferring to rephrase it in terms of questions about generalization.
Since Skinner is obviously controversial, and these philosophical issues are rather tangential to the topic, I would be inclined to say less about his ideas.
"In this case the stimulation has a memory-dependent reinforcing effect but also a memory-independent, momentary 'priming' effect. The priming effect energizes the animal and briefly increases the probability that the response that earned it will be repeated." But food reward also has this effect -- see a well-known pigeon experiment by Killeen (Killeen, P. R., Hanson, S. J., & Osborne, S. R. (1978). Arousal: its genesis and manifestation as response rate. Psychological Review, 85, 571-581).
I have deleted the suggested addition about parsing reward into liking, wanting, etc. My reasons are, first, that the definition here is a definition of reinforcement, not a definition of reward. The Berridge and Robinson view that reward can be parsed into liking, wanting, and learning makes no mention of reinforcement and belongs, perhaps, in the definition of reward. It does not really belong here, however, as "reinforcement" is a term designed at least in part to get away from subjective labels and mental causes like "liking" and "wanting." Moreover, it is not an accident that Berridge and Robinson avoid discussing reinforcement; their view of things does not deal with -- and cannot, in my view, deal with -- the classic scholarly distinctions between drive, incentive motivation, and reinforcement. They are interested in the subjective experience, not the objective control, of behavior. So I don't think their view belongs under this topic. Under the topic "reward," certainly, but not under the topic "reinforcement."