Hacker News | new | past | comments | ask | show | jobs | submit | login

Imagine a paying Adobe CC customer.

They use Firefly to generate a poster, and unbeknownst to them, the image it generated is a reasonable facsimile of a copyrighted/trademarked character.

The person has inadvertently committed copyright infringement.

So does Firefly need to come with a warning?

The safer solution, to the chagrin of another commenter, is for Adobe to neuter the tool by training only on data that Adobe has express permission to use.



Surely with all our contemporary AI prowess we can train a model that identifies "reasonable facsimiles of copyrighted/trademarked characters" after generation and alerts the user that the output could be argued to be one. Still, let the user decide.
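A minimal sketch of the post-generation check that comment imagines: embed the generated image, compare it against embeddings of known protected characters, and warn the user when similarity crosses a threshold. Everything here is a placeholder — the toy vectors stand in for a real image-embedding model, and the character names and threshold are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def flag_possible_facsimiles(generated_vec, reference_vecs, threshold=0.9):
    """Return names of reference characters whose embedding is
    suspiciously close to the generated image's embedding."""
    return [name for name, vec in reference_vecs.items()
            if cosine_similarity(generated_vec, vec) >= threshold]

# Toy stand-ins for embeddings produced by a real vision model.
references = {
    "character_a": [1.0, 0.0, 0.0],
    "character_b": [0.0, 1.0, 0.0],
}
generated = [0.95, 0.05, 0.0]

hits = flag_possible_facsimiles(generated, references)
if hits:
    print("Warning: output may resemble: " + ", ".join(hits))
```

Note that this leaves the decision to the user, as the comment suggests: the tool warns rather than refuses, and the hard part in practice is building a reference set and an embedding robust to style changes, not the comparison itself.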

We do not need creative technology to regulate observance of copyright law.

(By the way I think the chagrined other commenter was yours truly ;-))


With that approach you risk ending up in a very frustrating loop of rejected, too-similar outputs... A bit like picking a name in an MMORPG that's been out for a few months, which turns into a hell of getting your name requests rejected over and over again.


Not really. You just have a checkbox somewhere that says "don't copy" and then it won't.

Leave the decision to the user instead of baking it into the technology.


Pretty sure "just having a checkbox" is a massively difficult problem when it comes to AI tech and baking it in is much easier.


A simple warning that what's been generated looks similar to something copyrighted is not a bad idea. Then it's up to the AI user to do their due diligence if they intend to use the resulting work for commercial purposes. Neutering the tool from the get-go is a step too far.


People accidentally recreate other companies' logos in Adobe Illustrator all the time.



