Because no one cares or can afford to go after a few hateful posters, especially anonymous ones.
Fraudsters aren't necessarily even in U.S. jurisdictions.
> Why not assume the same bad actors would just use expanded liability as a weapon themselves?
Oh, the big platforms would not let this happen. Otherwise trolls would already be wielding CSAM as a blunt weapon, but look how effectively that is quashed.
Two multi-billion-dollar lawsuits by a fraudster who is victimizing me personally right now beg to differ. Scammers and abusive people absolutely do abuse the courts. The rate is lower than for other kinds of abuse because they have to be well funded to do it-- but unlike rude or threatening comments online, you can't just ignore a court.
I can also say firsthand that Wikipedia likely would have been destroyed in 2006 by vexatious litigation if it weren't for S230. I've been involved in a number of other online forums, and people trying to extort through legal threats are basically a constant; I'd be surprised if HN doesn't get them. With S230 these threats are fairly toothless. If they had any bite at all, most smaller services just couldn't exist, because the cost of dealing with them easily dwarfs the cost of providing the forum.
The fundamental issue I think your view faces is that even without S230 protection there is a lot of bad stuff in the world that we just can't stop. You could conjecture some further restriction of S230 narrow enough that the liability wouldn't become an abuse vector. But since there is so much bad that we can't stop even where S230 offers no protection-- even when the parties aren't, like the big social media platforms, nearly immune to litigation-- it's hard for me to imagine how limitations narrow enough to avoid abuse wouldn't also be pointless and ineffective.
I fully agree that there is bad crap out there-- but that doesn't mean something can actually be done about it.
It's hard to discuss without a concrete proposal. Advocating for it absent one, as you've done, also seems dangerously close to advocating for any reduction, well considered or otherwise. I think at the end of the day our problems are antitrust, not content liability. The horrible practices of platforms wouldn't be such a big deal were it not for network-effect lock-ins.
Yeah, I would never want these exceptions to apply to smaller companies.
> I think at the end of the day our problems are antitrust, not content liability. The horrible practices of platforms wouldn't be such a big deal were it not for network-effect lock-ins
Yeah, touché.
Antitrust just seems like an intractable problem in the current political climate, while punching large-cap-only holes in 230 (even if for the wrong reasons) feels reachable.