This is one of those very profound areas -- what exactly does it take to create an artificial intelligence that is genuinely intelligent? One thing I find unfortunate is that a strain of pragmatism-smelling thinking seems to run through a lot of AI folk. Notably not Deutsch, who seems to take questions like what is conscious and what isn't seriously, even if I think he acknowledges we don't know the answer yet.
But basically, the main point to be reckoned with here is whether a being's learning something is an objective fact (I lean in this direction), or just a matter of it physically behaving in a way that can be interpreted as learning (for instance, we interpret various physical systems as running Linux and doing various intelligent things, but our minds seem to be learning and thinking in a more objective sense). Notably, people of the latter school don't worry about whether there's an objective difference between seeing and recognizing the color red and simply knowing the mathematical laws of physics governing the relevant brain/light interactions.
They'd say the seeing of red is just a useful fiction we use to describe that interaction.
So far I have put this brand of thinking in the category I tend to be wary of -- namely, the 'shut up and calculate' variety. While that approach may produce some results, there's no reason to think that's all there is to knowledge.
This background is relevant because I do think that if one wants there to be any objective oughts, not just the 'behaves as if' kind, there needs to be some objective meaning to positive and negative. Feeling things like agony seems to offer a foundation for such objective meaning in a way nothing else I've seen does.
I have no idea what other ways there are of being objectively intelligent (that is, where it's an objective fact that learning has taken place), but we can at least say that conscious creatures like ourselves seem to be one example.