Almost four years ago, we started Discord to bring people together around games. Fast forward to today, and over 200 million people have used Discord to do just that.
While our goals are focused on bringing people together around games, people also use Discord to talk about their open source projects, their favorite bands, TV shows, and more. With all this growth, though, come some people doing bad things on Discord, which we don’t want in our community.
Two years ago, we began building a team focused on keeping our community safe on Discord (and since then, the Trust and Safety team has grown tremendously). Keeping you safe means making sure real-world harm doesn’t come to you and that, no matter who you are, you don’t feel intimidated or harassed out of participating on Discord.
There’s been a lot of chatter recently from our community wondering how Trust and Safety decisions are made at Discord. Let’s pull back the curtain and give you a look behind the scenes.
Our Trust and Safety Team currently reviews more than 800 reports every day for violations of our Terms of Service and Community Guidelines, in total handling more than 6,000 reports a week. Those reports vary greatly: sometimes the team may be investigating server raids and NSFW avatars; other times it’s removing deeply disturbing content like gore videos or revenge pornography. We also get reports where a person demands we ban another person for “calling them a poopyhead,” while other times someone is being doxxed or in danger of self-harm and a friend of theirs reaches out to us.
Further complicating things, we also get reports from people who use a combination of false information, edited screenshots, socially engineered situations, or mass reports in an attempt to get a person banned or server deleted. We don’t act without clear evidence, and we gather as much information as possible to make informed, evidence-based decisions that take into account the context behind anything that’s posted. We’ll talk about some of the hard decisions we face later in this blog.
In many situations, what happened is pretty obvious — a person has raided a server to post shock or gore content, they’re posting someone’s private information, or they’re directly threatening to harm someone in real life.
There are other cases where the situation is not so simple. Sometimes, parts of a conversation have been deleted, a slur is used as an act of reclamation, or someone is distributing a hack to a game on Discord that is generally used for cosmetic purposes — but could be used to cheat under certain conditions.
The Trust and Safety team seeks out context to best evaluate what’s going on even when things seem ambiguous. To illustrate the complexity of Trust and Safety’s decision making, see the three scenarios below and the accompanying considerations.
Two people report each other for bad behavior. One of them clearly started the harassment, while the other escalated it. It starts out with simple insults, but they’re not willing to block each other.
Eventually, it escalates to where one threatens to shoot the other’s dog and the other responds by making a sexual threat towards the initial person’s boyfriend.
Meanwhile, they’re doing this in a channel that has plenty of other people in it, some of whom are clearly uncomfortable with the escalation, and one of those bystanders in the server writes in too, asking us to do something about it.
Finally, not only is the owner of the server not doing anything, they’re actually egging the two people on, further escalating the situation.
Who should we take action on? Is it the person that started it? The person that escalated to a threat first? Or is it both people, even though each believes the other is at fault? Both people could’ve solved it by blocking the other — should we take any action at all?
How much do we believe each person felt threatened by the other person and thought the only reasonable thing to do was to keep engaging?
Should we also take action on the server owner in some way for egging them on instead of defusing the situation? If we do take action on the server owner, what should it be? A warning? What if one of the server members reports that the owner was privately messaging the two people in order to keep the feud going? Should we punish the server owner instead of the two people?
Someone is banned for messaging people across servers a combination of racial slurs and spam. This person contacts us to appeal their ban.
First, we inform them of the specific Community Guideline they have violated. Then, the banned person asks which specific message led to the ban. They insist they’ve done nothing wrong and never violated the Community Guidelines.
They claim they’ve been an upstanding citizen, are in twenty different Discord servers, and have a host of users that can speak on their behalf. They insist it’s a false ban based on false reporting.
Finally, they enlist some of those people to write in and tell us that the person was maliciously reported. They demand we overturn the ban immediately.
Is the banned person acting in good faith? Do they legitimately not understand how they violated our Community Guidelines?
Are they simply trying to identify the reporter? Should we provide vague information? Will the banned person continue arguing that whatever messages we have are insufficient?
How do we respond to the supporters that are writing in about this ban? How much information should they get about the situation?
Someone worriedly reaches out to us about a DM they received from another person claiming to be Discord staff. The DM is a warning that their messages are being monitored and that, if they continue, the authorities will be contacted. They ask us if the message is real.
While the DM isn’t from Discord, the person pretending to be Discord staff contacts us and admits to sending those messages from an alt. The impersonator claims they lied in order to dissuade the initial person from self-harming.
When we investigate, the impersonator’s claim does appear to be true. The initial person was talking about some harmful activity, and after receiving the impersonated warning, they’ve completely stopped.
Impersonating Discord staff is a violation of our Community Guidelines. Most of the time, impersonators engage in extremely harmful behavior and will receive an immediate ban.
In this case, it appears the impersonator has good intentions. Should we take action on the impersonator? Do we just warn them not to do it again? Do we just let it go?
On the other side, should we confirm with the initial person that the message was not from Discord? If we do that, does this encourage them to continue to self-harm?
All of this is a lot to consider, and Discord’s Trust and Safety team is tasked with answering questions like these hundreds of times a day, seven days a week. Each situation is different, and each one involves real people who will be impacted by what we choose to do.
When creating new policy, we evaluate all available information on that topic to understand what the best policy is. We look at academic research on the topic, what other companies do, and what users, non-users, and experts in the field think. We consider whether something is illegal, whether something is harmful, and how scalable our operations are. We leverage all of these checks and balances to remove as much personal bias or interest as possible. We believe we have a deep-seated responsibility to be objective about what we allow or restrict on Discord.
Our decision-making process prioritizes safety from harm. We strive to create a platform that is safe and inclusive, regardless of someone’s race, ethnicity, gender, gender identity, religion, or sexual orientation. After all, everyone can play and enjoy games, and Discord should be that place where anyone can find someone else to play with.
Along with thinking about how we can prevent harm to people on Discord, we consider scale, privacy, nuance, and freedom when developing policy.
How can we scale the enforcement of this policy to our enormous user base? A policy that sounds good but isn’t enforced isn’t actually good policy. It’s important not just to talk the talk but to also walk the walk.
How can we balance our ability to investigate potential bad things on Discord with the right to privacy that our users have and should have?
In the real world, Big Brother isn’t watching you inside your home during a conversation with two of your friends, even if you’re up to no good. Should we keep using that model on Discord if it means that people have the ability to chat about bad things, and that Discord may be used for bad actions? Are people okay with automated software or other humans reading their private conversations to stop potential bad actors?
If a potential harm is very hard to discern, is it Discord’s place as a platform to moderate a particular form of speech?
If something is reported frequently but we can’t conclude whether it’s definitely bad, and users can take some action to protect themselves, should we rely on them to do so?
If someone opens up a server and promises rewards to its members, but people complain that they’re not fulfilling those assurances, should we forbid offering rewards?
With all this said, how can we make sure that good people don’t feel like we’re censoring them? Just because we don’t understand a hobby or interest, does that give us permission to ban it from our platform?
After considering scale, privacy, nuance, and freedom, we outline all possible outcomes to the best of our ability, and try to find the solution with the best possible answers to those questions.
Eventually, after thorough discussion, research, and talking to third parties, we make our way to a policy document. To get more perspective, we circulate that document to other Discord staff members who aren’t on Trust and Safety and ask for their feedback.
Finally, we arrive at a conclusion, implement the policy, and monitor it. If we receive new information or the policy isn’t having the impact we’d hoped, we adjust it to make sure it’s effective. We’re constantly listening, observing, and wanting to do better, and our policy reflects this as a living, breathing work in progress.
On that note, we wanted to talk about a recent change we’ve made to uphold our commitment to listening to community feedback.
Over the past couple of weeks, posts have appeared asking about Discord’s stance on a niche area of NSFW policy: cub porn. A screenshot of an email we sent about a year ago, in February 2018, has garnered significant commentary and criticism of our policy.
As our Community Guidelines state, the following immediately results in account deletion:
Furthermore, the following will lead to content removal and a warning (or ban depending on the severity):
One major reason this policy is in place on Discord is that there is a federal law in the United States against sexualized images of minors, which includes cartoons and drawings. You can see this distinction in action on Google Images, which does not show results for lolicon but does show results for cub porn.
Discord’s current policy is that anything human or humanoid is forbidden (including anthropomorphized characters). This includes most cub pornography.
While this is already more restrictive than what the law requires, we’ve received feedback that we’re not comprehensive enough here. As of today, we’re changing our policy to ban all cub porn. This means the ban on sexualization of minors now extends to non-humanoid animals and mythological creatures as long as they appear to be underage. We’re adding “cub” to the list of categories, after lolicon and shotacon, in our Community Guidelines to clarify that this content is not allowed on Discord.
It’s really important to us that the millions of people who use Discord every day can trust our decisions. We want this blog to provide transparency into our processes, and we’re going to continue that by providing a more in-depth view into our actions as a team with Discord’s first transparency report.
In this report, we want to provide more information about our content moderation outcomes, such as how many actions are taken a month, how many users are actioned, and what is causing their removal from the platform.
We think that transparency is good. It will shine more light on the work we do, help maintain accountability, and guide conversations about where we should be spending our time to get better. We’re looking to release our first report by the end of April, and want to continue releasing them quarterly after that.
Over the last couple of weeks, we’ve received a lot of feedback about our policies and decisions. We’ve also received death threats and personal attacks directed at those who put their heart and soul into keeping you all safe on Discord.
As time goes on and Discord grows even larger, there will likely be more situations where reasonable people may disagree on the best policy to have. When this happens, we hope to engage in constructive dialogue, not personal attacks or threats.
Lastly, we’ve always taken feedback on all of our decisions, from what features to build all the way to the policies that govern what is acceptable conduct on Discord. We hope this blog shows you how our Trust and Safety team keeps Discord a safe place to bring people together around games.
We look forward to continuing this dialogue with you.