They use Firefly to generate a poster, and unbeknownst to them, the image it generated is a reasonable facsimile of a copyrighted/trademark character.
The person has inadvertently committed copyright infringement.
So does Firefly need to come with a warning?
The safer solution, to the chagrin of another commenter, is for Adobe to neuter the tool by training it only on data that Adobe has express permission to use.
Surely with all our contemporary AI prowess we can train a model that identifies "reasonable facsimiles of copyrighted/trademarked characters" after generating them and alerts the user that the output could be argued to be one. Still, let the user decide.
We do not need creative technology to regulate observance of copyright law.
(By the way I think the chagrined other commenter was yours truly ;-))
With that approach you risk ending up in a very frustrating loop of rejected outputs flagged as copyrighted works... A bit like picking a name in an MMORPG that's been out for a few months: a hell of getting your name requests rejected over and over again.
A simple warning that what's been generated looks similar to something that's copyrighted is not a bad idea. Then it's up to the AI user to do their due diligence if they intend to use the resulting work for commercial purposes. Neutering the tool from the get-go is a step too far.