You seriously can't think of a single idea for how to ban toxic players at scale?
Start with crowd-sourcing via reports, then pass it over to customer service once a critical mass has been achieved. Make sure one person can't easily create a new account after being banned. This really isn't a new problem, tons of companies of all sizes have tackled it with varying success.
Spoken like someone who’s never played the game. Get ready to be banned for no fault of your own, just because your three teammates decided to report you for the lulz. There are people who post on /r/dota2 complaining that they get banned from the game for nothing but picking techies each game. They don’t even say anything. People just report them immediately.
By the way, streamers are one of the most high impact users of the game. They are also one of the most unfairly targeted by report systems. This problem is extensively studied and very tricky.
I've been playing DOTA since the Warcraft 3 days, and a lot of HoN. I'm seriously grateful for the excellent Linux client.
These days I've moved over to Dota 2, and I no longer get banned and moved to unranked for picking Techies. I play him at ancient/divine levels, and getting reported is very common. Techies is my favorite hero, as he is more about strategy than quick, precise reactions. I think Valve has made some progress in this area, especially now that you can commend players.
Just because there are false positives, or one company has a bad implementation, doesn't make it an unsolvable problem.
As the parent mentioned: ML sentiment analysis + logging + peer reporting + final human analysis = problem effectively combated
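A minimal sketch of what that hybrid pipeline could look like: a cheap ML pass scores chat, and only the ambiguous middle band reaches the expensive human queue. All thresholds, word lists, and function names here are illustrative assumptions, not any company's actual system.

```python
# Hypothetical hybrid moderation triage: cheap automated scoring first,
# human review reserved for ambiguous cases. Illustrative only.

def sentiment_score(line: str) -> float:
    """Stand-in for a real toxicity model; returns 0.0 (clean) to 1.0 (toxic)."""
    toxic_words = {"idiot", "trash", "uninstall"}  # toy lexicon, not a real model
    hits = sum(w in line.lower() for w in toxic_words)
    return min(1.0, hits / 2)

def triage(chat_log: list[str], peer_reports: int, report_threshold: int = 3) -> str:
    # Worst line in the log drives the decision.
    score = max((sentiment_score(line) for line in chat_log), default=0.0)
    if score > 0.9 and peer_reports >= report_threshold:
        return "auto_action"    # clear-cut: ML and peers agree, act automatically
    if score > 0.4 or peer_reports >= report_threshold:
        return "human_review"   # ambiguous: the part companies don't want to fund
    return "no_action"

print(triage(["gg wp"], peer_reports=0))                    # -> no_action
print(triage(["uninstall, trash idiot"], peer_reports=4))   # -> auto_action
print(triage(["you are trash"], peer_reports=1))            # -> human_review
```

The point of the middle branch is exactly the thread's argument: false positives from either signal alone (a grudge report, or an ML misfire) land in human review instead of an automatic ban.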
It's the final human analysis that companies are loath to fund (e.g. Facebook), as its cost scales with user count.
But it's not unsolvable with current tech, merely expensive. Therefore, companies simply aren't prioritizing it. And won't, as long as toxic users aren't impacting the bottom line.
Kay. What are the odds that some HN commenters are going to solve a problem that has been an active priority of people in the field for years?
I can’t speak for Valve or Riot, but at S2 it was a concerning issue. There’s just no good way to do it when the people involved are actively malicious. If you think there is, get ready to have your community collapse around you as everyone complains about unfair bans.
I don’t think you really appreciate the scale of the problem. Final human analysis is not possible when there are literally millions of games per week. It’s also not something that ML can identify cleanly — the moment it does, the culture will adapt to bypass the evaluator. It always does.
I absolutely appreciate the scale of the problem, and the adversarial nature of peer reporting. My day job has those same characteristics.
Millions of games per week * 30 minutes per game * avg lines of chat per minute = manageable w/ a proper streaming architecture
Especially when you have access to a massive, perfectly-scaled, distributed edge compute system. (i.e. running minimal, performance-optimized models on users' opponents' clients to do the initial detection / filter / compression pass)
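A quick back-of-envelope check of that throughput claim. Every input below is an illustrative assumption (not real telemetry from Valve or anyone else); the point is only the order of magnitude.

```python
# Back-of-envelope: is the raw chat volume tractable for a streaming system?
# All inputs are assumptions for illustration, not real telemetry.

games_per_week = 5_000_000     # "millions of games per week"
minutes_per_game = 30
chat_lines_per_minute = 2      # all players in a game combined, assumed average

lines_per_week = games_per_week * minutes_per_game * chat_lines_per_minute
lines_per_second = lines_per_week / (7 * 24 * 3600)

print(f"{lines_per_week:,} lines/week, ~{lines_per_second:.0f} lines/sec")
# -> 300,000,000 lines/week, ~496 lines/sec
```

A few hundred short messages per second is a trivial load for any modern streaming pipeline, which is the argument being made: ingestion isn't the bottleneck, paying for the analysis is.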
But my point is this is fundamentally an economic problem, given current state of the art, not a technical one.
Companies are looking for pure-technical solutions because they're cheaper, and then complaining that it's a hard problem because they're unwilling to properly fund hybrid systems until state of the art can deliver.
ML is a first-order approximation of human ability, not a magic unicorn that gives you exactly what you want. That's the definition of engineering: how do I build a system that fulfills my requirements from the pieces I have, not the pieces I wish I had?
So I don't feel much pity when companies allow toxic user bases to flourish because it's cheaper than funding solutions.
* Above is in no way intended to belittle the awesome work folks are doing in this space with ML detection. But sometimes, as engineers, we need to admit when management is making unethical choices for financial gain.
At a basic level, it is not very hard to examine a chat log and glean who is being blatantly toxic.
A big problem with games like League of Legends or Dota 2 is that you can easily be toxic or cause your teammates to be toxic without chatting or using voice comms.
There are very common trolling methods that do not require any use of chat with the express purpose of trying to incite toxicity in other players, some blatant, and some not.
However, the bigger problem is that honest mistakes can be misinterpreted by your teammates as toxicity:
Losing a close 1v1 vs your laning opponent
vs
Accidentally going too deep into enemy territory and dying once.
vs
Getting killed while attempting to secure map control for your team.
vs
Playing too aggressively and overextending and dying many times over the course of a game.
When things like this happen, your own teammates may become upset at your poor performance and begin to lash out.
The biggest problem here is that in these games, it can feel like you have no agency over the outcome of the game when your teammates do not perform at the perceived skill level you have of them.
This is where toxic players become hard to deal with. They will start doing things that incite toxicity in their teammates while maintaining plausible deniability:
- Confusing teammates by providing useless or inaccurate information about the current game state. (Pinging, map calls, cooldowns, timers, etc. - many of these require no use of chat or voice comms)
- Picking on teammates by making consistently selfish plays to their detriment. (Courier stealing, going out of one's way to steal farm from a lower-position teammate, unnecessary kill stealing)
- Improper role identification: your team strategically expects you to do X, and you do Y. Y could even be better than X in terms of winning the game; it doesn't matter.
All of the above can be either common gameplay mistakes or intentionally malicious, but the point is that once teammates no longer trust one another, some will start verbally abusing the others, while the rest begin making similar mistakes themselves (tilting) and lose the game for their own team.
Many players want to feel like they were the deciding factor in the game's outcome, and they make choices that increase their influence on the game even when those choices lower their chances of winning - and this is what often leads to toxicity.
Np. I didn't take it as an insult. And I know what you meant -- I've seen the cool work that's come out of teams working at game shops.
And I hope I'm wrong, but I get the feeling they (and community managers) are working without the full support of their management to fix the problem, e.g. putting a band-aid on a broken bone.
You're quite cocky. That's not an insult. I think if you cut your teeth on this problem for a couple months, you'd look back on this comment with some mixed feelings.
When a system is inherently toxic, there is no way to make it not-toxic short of coming up with a different substance. We can't make nuclear waste pure just because we wish it were. This is a very similar situation to this particular type of game. When it's a team game, and you rely on people who aren't doing their job, and you're separated by distance, it brings out the worst in you.
The problem isn't chat. If it were, it would be solved already, for the reasons you point out. The problem is the people. When someone doesn't like someone else, they will find a way to ruin their game while bypassing your censors. And yes, you can ban some of them, but not most of them, and not when most of the community acts like this. Which they do. Which you can't understand unless you go play the game for a bit.
Toxicity is learned and reinforced (or not) behavior.
In the same way that constructive, charitable norms spread through HN comments (substantially more so than on similar forums), negativity spreads too.
I'm not expecting any company to be able to turn their community into a paragon of empathy overnight. But I do believe that putting systems in place that punish poor behavior and reward good behavior has an impact in the long run.
From my first comment:
> Therefore, companies simply aren't prioritizing [addressing the problem]. And won't, as long as toxic users aren't impacting the bottom line.
Because I honestly don't believe they care, in any non-monetary sense. They're essentially amoral. If they can make a billion dollars while people shout racist or sexist slurs at each other, they're fine with that.
And that's the primary annoyance I have with game companies. We hear "The player base is caustic, and we can't do anything about it, because it's too hard," when I believe reality is closer to "We ignored this problem as an industry (boys will be boys!), and now we don't want to be blamed for helping raise a generation of borderline sociopaths."
And yes, I spent many a college evening watching friends troll internet strangers in DotA, pre-LoL. So I'm familiar with the concept and execution.
Chat is not the only way you can troll your teammates and cause them to rage quit (or scream at you and get banned themselves). I would say the griefing caused by people using slurs is actually a pretty small share of it.
You can be silent and ruin hundreds of games. Distinguishing that from genuinely bad players is not really easy.
How do YouTube, Facebook, and others ban illegal or unsavory content? Teams of moderators with good tools. Unfortunately, it's also a thankless job with the potential to cause PTSD.
YouTube, Facebook, and company have a ridiculous number of false positives for the money (and the people - they have quite a lot of human moderators) they throw at the problem. Also, they enjoy near-complete market dominance, so even if you want to move off YouTube... where are you going to go?
With the same rate of false positives, there will be enough community rage (I guarantee someone is going to find a way to get your system to ban the top 10 streamers of your game within 2 days of you putting it in place) that your players (both good and bad) will go somewhere else. :)
The problem YouTube, Facebook, and similar companies face is whole orders of magnitude more complicated than policing a game chat. It is, for example, highly unlikely that you will be censoring honest content when you encounter the words 'gas' and 'Jews'.