Not really: the parrot must have an internal representation of 'color' as something (a concept / quality) that can be had and shared between things. It (hm, s/he) must therefore understand that it is (it must represent it as) an abstract quality in the sense that it can be had by different things (likewise with 'shape').
Well, so what? This entails (I would argue) that it has the capacity to store and operate on abstract things (more or less the definition of 'symbol') - and in this very sense, the parrot has the faculty for symbolic processing/operations. Not only does it extract properties from sensory data and generalize them into concepts (things can be wibbly-wobbly-in-way-A, or wibbly-wobbly-in-way-B (first order of abstraction (well, n+1 really)) => things have wibbliness, and some things have the same kind of wibbliness (second order (or n+2))), but it may also operate on these concepts and use them in assertion/negation propositions: some objects have the same color but not the same shape. (I use the term 'proposition' in the more general logical sense, not necessarily as in 'a linguistic assertion'; but when asked, the parrot provides an answer that can be put in a truth table, so to speak.)
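To make the truth-table point concrete, here's a rough first-order rendering of what the parrot's two answers jointly assert (my own formalization, offered as a sketch - nobody claims the parrot literally computes this):

    \mathrm{Color}(key) = \mathrm{Color}(cup) \;\land\; \mathrm{Shape}(key) \neq \mathrm{Shape}(cup)

Each conjunct is independently true or false of the scene, which is exactly what lets the answers go into a truth table.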
Interestingly, that's one of the ways to actually define 'semantic content': something which provides truth-conditions. E.g., some language-formalizing systems would say that the proposition "The present King of France is bald" has no semantic content / no meaning, because it is neither true nor false (presently there is no King of France, and the reference fails) - though Russell would disagree ('how can it be neither true nor false?'); on his analysis the sentence is simply false. If interested: "On Denoting", 1905.
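For reference, Russell's move in "On Denoting" is to analyze the definite description away, so the sentence gets a truth-value after all. The standard textbook rendering of "The present King of France is bald":

    \exists x \,\big(\mathrm{King}(x) \;\land\; \forall y\,(\mathrm{King}(y) \rightarrow y = x) \;\land\; \mathrm{Bald}(x)\big)

Since nothing presently satisfies King(x), the existential fails and the whole sentence comes out plain false - not truth-valueless.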
So what #2? Well, one could argue that symbolic apparatus + semantics (a set of truth-functions / grounds for constructing them) => 'language', so it's kind of a big thing, I'd say. (At the very least, evidence of abstract thought in animals is always somehow very interesting to me.) Again: symbols (abstracting from properties => projecting these abstracts back onto sensory data) + symbolic processing (basically, the ability to ascribe truth/falsity to/via them) => darn interesting. There's a whole hot debate about how/whether this can be achieved 'bottom-up' from neural networks (Connectionism) (is 'symbolic processing' the proper level of reduction? Even if cognition is 'symbolic' in the end, perhaps it is best modeled bottom-up, with 'symbolic processing' arriving as an emergent/epiphenomenal capacity?) - or whether you need an 'innate capacity' for symbolic processing / language ('language genes'): top-down language faculties (which still cash out in those same neural networks, but, simply put, you won't get much by starting bottom-up) (Classicism / Nativism (not really interchangeable; I'm using them loosely / generalizing, etc.)).
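To illustrate what 'symbolic processing' buys you in the Classicist picture, here's a toy sketch (entirely my own construction; the object and property names are made up): once objects are represented as bundles of explicit abstract properties, the parrot's same/different task becomes a trivial operation over symbols.

    # Toy "Classicist" rendering of the match-to-sample task: objects are
    # bundles of explicit, abstract properties (symbols), and the answer is
    # computed by operating on those symbols directly. Illustrative only.

    def same_and_different(a: dict, b: dict) -> tuple[list, list]:
        """Return which shared properties match, and which don't."""
        shared = a.keys() & b.keys()  # properties both objects have
        same = sorted(k for k in shared if a[k] == b[k])
        different = sorted(k for k in shared if a[k] != b[k])
        return same, different

    key = {"color": "green", "shape": "key-like", "material": "metal"}
    cup = {"color": "green", "shape": "cup-like", "material": "ceramic"}

    same, different = same_and_different(key, cup)
    print("what's same?", same)            # ['color']
    print("what's different?", different)  # ['material', 'shape']

The Connectionist retort, of course, is that the dicts above smuggle in all the hard work: the interesting question is how a network gets from pixels to 'color' and 'shape' as discrete, manipulable entries in the first place.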
> The parrot sees a green key, and a green cup, and answers "color" to "what's same?" and "shape" to "what's different".
For all I know, the parrot recognizes that both objects have the same color, and I assume that "color" and "shape" signal nothing more than equality vs. inequality - they do not imply the attribution of a concept or quality - at least until the parrot can also tell the difference between, for example, something wet and something dry. In my skepticism, the only meaningful ability the parrot demonstrates is saying out loud whether the color matches.
Which, IMHO, does not imply as much as you say. There must definitely be some internal model of the world for this to happen. You might be right that there is attribution of qualities and concepts to objects, and that objects share qualities - but I remain skeptical. For instance, an ant will recognize sugar and react to it, and react differently to, for example, something noxious, yet we know that ants are not capable of any complex internal models.