Yeah, also in principle. But the cleanroom approach isn't technically required for humans either -- it became standard because the legal notion of a derivative work is very fuzzy and gradually changing, and lawsuits are expensive and chancy, so you want a process that's provably non-infringing. "Yeah, I learned some general ideas from this code, but I didn't derive any of my code from theirs" seems to be a logical rat's nest. With the explainable-AI approach to this particular problem, the more intelligent the AI, the more this solution resembles analyzing brain scans of your engineers. If your engineers could have produced a "derivative work" without literal copying, why can't the AI?
I agree, but we aren't anywhere close to that level yet. For that to be true, the AI would at least need to be able to explain the code it created. What we have now is basically a fancy Markov ad-lib code-completion tool.
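To make the "Markov ad-lib" jab concrete, here's a toy bigram completer -- a deliberately crude sketch (the function names and tiny corpus are made up for illustration), not a description of how real models work:

    import random
    from collections import defaultdict

    def train_bigrams(tokens):
        # The whole "model" is a table of which token follows which, and how often.
        counts = defaultdict(lambda: defaultdict(int))
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    def complete(counts, start, length=10):
        # Ad-lib continuation: repeatedly sample a next token given only the previous one.
        out = [start]
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break
            choices, weights = zip(*followers.items())
            out.append(random.choices(choices, weights=weights)[0])
        return " ".join(out)

    corpus = "for i in range ( n ) : print ( i )".split()
    model = train_bigrams(corpus)
    print(complete(model, "for"))

Real models condition on vastly more context than one previous token, but the relevant similarity stands: the model can tell you what plausibly comes next, not why, or where it learned it.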
A more intelligent agent should be able to tell you where all of its knowledge came from. I personally would like my AI to be above "gut-level instincts"; otherwise it reinforces blind trust.