> but because it should help make it obvious that you can't trust anything you see on a screen.
I think this in general (text, audio, video) will produce a societal earthquake - in a way, send us back into the Middle Ages. You can't really verify things yourself, because everything can be faked; all you can do is anchor yourself to some trusted authority.
Imagine you read on Hacker News an (AI-generated) article about a new breakthrough in physics - new convincing evidence for the cyclical universe hypothesis. In the discussion, there will be a lot of seemingly informed comments arguing about it (all AI-generated), links to video presentations from reputable scientists (all AI-generated), and papers (all AI-generated). It will all be wrong (i.e., there wasn't any breakthrough in the first place), but impossible for a non-physicist to assess correctly.
In a way it will lead to a centralization of the internet and of knowledge; people will stick only to their trusted sources. For some it may be Wikipedia and the NYTimes, for others some AI-generated island of knowledge/manipulation.
I also wonder what effects this will have on social platforms when 99% of content is generated by AI.
My thought is: if you think the conspiracy culture-war shitshows are bad now, with talking heads on every side saying whatever feeds red meat to their base, imagine trying to bridge the gap in a world with infinite red-meat generators competing with each other for audience eyeballs forever.