
A few months ago we promised to keep the conversation going about Trust and Safety here at Discord. Today, we’re proud to share our first-ever Transparency Report. While it took longer than expected, we hope it shows what making tough decisions looks like on a platform of over 250 million registered users.

Wait — what’s a Transparency Report?

A Transparency Report is a way for us to make visible the reports we receive from our users about bad behavior on the platform and how we respond to them.

We believe that publishing a Transparency Report is an important part of our accountability to you — it’s kind of like a city’s crime statistics. We made it our goal not only to provide the numbers for you to see, but also to walk you through some obstacles that we face every day so that you can understand why we make the decisions we do and what the results of those decisions are.

First, some numbers

Our Trust and Safety (T&S) team receives more than a thousand reports daily and over seven thousand a week for violations of our Community Guidelines. This Transparency Report covers the reports Discord received and the actions we took from January 1 through April 1, 2019:

In these three months, we received a total of about 52,400 reports — that’s 52,400 times that users reached out to us to get help. When users make reports to us, they can select from the following categories in our Help Center, presented here along with examples to illustrate issues we may receive.

  • Harassment: A user repeatedly creates new accounts to message and insult someone who’s already blocked them; members of a server target a user and keep sending them racial slurs (hate speech).
  • Threatening Behavior: One user threatens another, whether explicitly (“I’m going to shoot you”) or implicitly (“you better watch out or something bad’s going to happen to you”).
  • Hacks/Cheats: A server is selling stolen accounts for popular games, or distributing cheats for multiplayer games.
  • Doxxing: A user shares the address of another user’s school, even encouraging others to swat that person.
  • Malware: A user DMs another user a trojan virus disguised as a game patch file in order to take control of their computer.
  • Exploitative Content: A user discovers their intimate photos are being shared without their consent; two minors flirt and end up trading intimate images with each other.
  • NSFW (Not Safe for Work) Content: A user joins a server and proceeds to DM bloody and graphic violent images (gore) to other server members.
  • Self-Harm: A user is depressed and describes not having a good time in school and not having any friends; a user is explicit about committing an act of self-harm.
  • Raiding: A group of users coordinate joining a new server, spamming insults and @everyone mentions at the same time, disrupting that community’s discussion.
  • Spamming: A user — or more likely, an automated spambot — joins a large server and then DMs everyone in that server a link to a meet-someone-now dating site.
  • Other TOS Violation: Users select this category when they’re not sure what else fits. For example, a user attempts to compromise another user’s account, impersonates a Discord staff member, or violates several parts of our Terms of Service at once.

These categories aren’t set in stone. One of the things that we continue to work on is ensuring people can easily submit reports as needed and that they know what information is required for us to complete an investigation. We’re currently looking at iterating on these categories and making it more intuitive to report.

What happens after Discord receives a report?

Trust and Safety Investigates

In the investigation phase, the Trust and Safety team acts as detectives, looking through the available evidence and gathering as much information as possible. This investigation is centered around the reported messages, but can expand if the evidence shows that there’s a bigger violation — for example, if the entire server is dedicated to bad behavior, or if the behavior appears to extend historically. We spend a lot of time here because we believe the context in which something is posted is important and can change the meaning entirely (like whether something’s said in jest, or is just plain harassment).

Taking Action

If Trust and Safety can confirm a violation, the team takes steps to mitigate the harm. The following are actions that we may take on users, servers, or both (a rough sketch of how such actions might be recorded follows the list):

  • Removing the content
  • Warning users to educate them about their violation
  • Temporarily banning users for a fixed amount of time as a “cool-down” period
  • Permanently banning users from Discord and making it difficult for them to create another account
  • Removing a server from Discord
  • Disabling a server’s ability to invite new users
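
To picture how these outcomes feed the per-category action rates discussed later in this report, here is a minimal sketch of recording actions and rolling them up. The enum values, record fields, and category strings are our own illustrative assumptions, not Discord’s internal schema.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    """Illustrative enforcement outcomes; not Discord's internal schema."""
    NONE = auto()               # reviewed, no violation confirmed
    CONTENT_REMOVED = auto()
    USER_WARNED = auto()
    TEMPORARY_BAN = auto()
    PERMANENT_BAN = auto()
    SERVER_REMOVED = auto()
    INVITES_DISABLED = auto()

@dataclass
class ReportRecord:
    """One user report and the action (if any) it resulted in."""
    category: str    # e.g. "Harassment", "Spamming"
    action: Action

def action_rate_by_category(records: list[ReportRecord]) -> dict[str, float]:
    """Fraction of reports in each category that led to some action."""
    totals, actioned = Counter(), Counter()
    for record in records:
        totals[record.category] += 1
        if record.action is not Action.NONE:
            actioned[record.category] += 1
    return {category: actioned[category] / totals[category] for category in totals}

# Example: two harassment reports, one of which led to a warning.
records = [
    ReportRecord("Harassment", Action.USER_WARNED),
    ReportRecord("Harassment", Action.NONE),
    ReportRecord("Spamming", Action.PERMANENT_BAN),
]
print(action_rate_by_category(records))  # {'Harassment': 0.5, 'Spamming': 1.0}
```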

Discord also works with law enforcement agencies in cases of immediate danger and/or self-harm.

Action Breakdown

Any time Trust and Safety acts on a report, we keep a record of that action. We also track the percentage of reports we take action on:

A note on the graph above: while this shows how often we took action in each category, the results aren’t always visible to the reporter, and may not be the outcome that users expect or hope for when reporting violations to us.

Sometimes a user will follow up to ask what actions we’ve taken, but for many reasons we generally can’t share those specifics. For example, imagine a scenario in which a user faked their own suicide to end an internet friendship (this is a real situation that happens surprisingly often). Their concerned friend reports this user’s messages to us, but in the course of our investigation, we determine that this person is perfectly fine and chatting with their pals on another Discord account. For privacy reasons, we can’t share this with the concerned reporter. From their perspective, it appears that we’re withholding life-or-death information about their friend; from ours, we believe it is right to protect the privacy of our users, no matter how well-intentioned the reporter may be.

Still, some of those percentages look really low!

There are a number of reasons why a report may not appear to have been actioned:

  • False or malicious reports: Sometimes users send us edited screenshots or delete their half of the conversation, hoping to twist someone else’s words out of context. Other times, users band together to report an innocent user, hoping we will blindly ban them, as many other platforms use systems that will automatically remove content if it is reported enough times. Discord does not do this.
  • Unreasonable expectations: We may receive a harassment report about a user who said to another user, “I hate your dog,” and the reporter wants the other user banned. We always factor in the severity of harm and whether the issue can be resolved using built-in moderation tools. In these cases, educating the user on how to handle less-than-ideal experiences on their own goes a long way. Being offended isn’t the same thing as being harassed and we think it’s important as a platform to differentiate between the two.
  • Missing information: We require users to send message links so we can locate the violation and investigate the context (see the sketch after this list for how a link pins down a specific message). Unfortunately, we often get reports where we don’t have any evidence and are only going on hearsay. Because one of our values is to confirm any violation before taking action, we generally do not act on reports with missing information.
  • Mislabeled reports: While users may categorize their report incorrectly for many reasons, it’s usually because our categories are broad and subject to interpretation (what one user considers “doxxing” may be considered “harassment” to someone else). However, even if a report is mislabeled, we still investigate and review every report.
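
On the point about message links: to show why they matter, here is a minimal sketch of how a link of the form https://discordapp.com/channels/... breaks down into the server, channel, and message IDs needed to locate one specific message and its surrounding context. The regex, names, and example IDs are our own illustration, not Discord’s tooling.

```python
import re
from typing import NamedTuple, Optional

class MessageRef(NamedTuple):
    """The three IDs that pinpoint a single message posted in a server."""
    guild_id: int    # the server the message was posted in
    channel_id: int  # the channel within that server
    message_id: int  # the message itself

# A server message link looks like:
#   https://discordapp.com/channels/<guild_id>/<channel_id>/<message_id>
# (discord.com also works; DM links use "@me" in place of the server ID and
# are not handled by this sketch).
_LINK = re.compile(
    r"https://(?:discordapp|discord)\.com/channels/(\d+)/(\d+)/(\d+)"
)

def parse_message_link(link: str) -> Optional[MessageRef]:
    """Return the IDs embedded in a message link, or None if it doesn't match."""
    match = _LINK.search(link)
    if not match:
        return None
    guild_id, channel_id, message_id = (int(group) for group in match.groups())
    return MessageRef(guild_id, channel_id, message_id)

if __name__ == "__main__":
    example = "https://discordapp.com/channels/123456789/987654321/112233445566"
    print(parse_message_link(example))
    # MessageRef(guild_id=123456789, channel_id=987654321, message_id=112233445566)
```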

Ultimately, many actions that we take are not immediately obvious to the reporter. For example, when we ban a user or remove a server, the reporting user isn’t notified. We’re generally not able to talk about individual actions for privacy reasons, which can make it difficult for the reporter to gauge the effectiveness of a report. Any of the above scenarios will also cause a report to count as “not actioned” for the category under which it was submitted, which is why our action rates don’t tell the entire story.

Accounts banned in the first three months of 2019

In the first three months of 2019, Discord banned the following accounts per category in response both to user reports and proactive work on Terms of Service violations:

For perspective, spam accounted for 89% of all account bans and was over eight times larger than all of the other ban categories combined (89% versus 11%).

Thanks to smart computers and automation, we have been able to stop 100,000 spambots from registering daily. We’re also continuously improving our anti-spam systems.
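
Discord hasn’t published how those systems work, so purely to give a flavor of what automated registration screening can look like, here is a toy sketch that scores signup attempts against a few heuristics. Every signal, threshold, and name in it is an assumption for illustration only.

```python
from collections import defaultdict
from dataclasses import dataclass

# Purely illustrative signals and limits; Discord has not disclosed how its
# anti-spam systems actually work.
DISPOSABLE_EMAIL_DOMAINS = {"mailinator.com", "tempmail.example"}
MAX_SIGNUPS_PER_IP_PER_HOUR = 5

_signups_this_hour = defaultdict(int)  # ip -> count (assume it is reset hourly elsewhere)

@dataclass
class SignupAttempt:
    email: str
    ip: str
    completed_captcha: bool

def looks_like_spambot(attempt: SignupAttempt) -> bool:
    """Score a registration attempt against a few simple heuristics."""
    score = 0
    if attempt.email.split("@")[-1].lower() in DISPOSABLE_EMAIL_DOMAINS:
        score += 2                       # throwaway inbox
    if _signups_this_hour[attempt.ip] >= MAX_SIGNUPS_PER_IP_PER_HOUR:
        score += 2                       # unusually fast signups from one IP
    if not attempt.completed_captcha:
        score += 1                       # failed or skipped the challenge
    return score >= 3                    # the block threshold is arbitrary here

def register(attempt: SignupAttempt) -> str:
    """Decide whether to allow a registration attempt, then record it."""
    verdict = "blocked" if looks_like_spambot(attempt) else "account created"
    _signups_this_hour[attempt.ip] += 1
    return verdict

bot = SignupAttempt(email="x@mailinator.com", ip="203.0.113.7", completed_captcha=False)
print(register(bot))  # blocked
```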

Moreover, we’ve been spending significant resources on proactively handling Exploitative Content, which encompasses non-consensual pornography (revenge porn/“deep fakes”) and any content that falls under child safety (which includes child sexual abuse material, lolicon, minors accessing inappropriate material, and more). We think that it is important to take a very strong stance against this content, and while only some four thousand reports of this behavior were made to us, we took action on tens of thousands of users and thousands of servers that were involved in some form of this activity. T&S frequently updates and iterates on its systems for detecting this harmful content, including by working with other companies solving similar problems and by implementing tools such as PhotoDNA.
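
To make the idea of hash-based detection concrete, here is a minimal sketch of checking uploads against a set of known fingerprints. The hash set and the use of a plain SHA-256 digest are our own simplifications; production systems such as Microsoft’s PhotoDNA rely on perceptual hashing so that resized or recompressed copies of a known image still match.

```python
import hashlib
from pathlib import Path

# Hypothetical fingerprint database of known harmful images, e.g. populated
# from an industry hash-sharing program. The single entry is a placeholder.
KNOWN_BAD_FINGERPRINTS = {
    "0" * 64,
}

def fingerprint(image_path: Path) -> str:
    """Hash the raw bytes of an uploaded image.

    SHA-256 is a stand-in here: perceptual hashes like PhotoDNA are designed
    so that edited copies still match, which a cryptographic hash cannot do.
    """
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def should_flag_for_review(image_path: Path) -> bool:
    """Flag an upload when its fingerprint matches a known harmful image."""
    return fingerprint(image_path) in KNOWN_BAD_FINGERPRINTS
```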

Amongst all other categories, we banned only a few thousand users total. Relative to our 50 million monthly active users, abuse is quite small on Discord: only 0.003% of monthly active users have been banned. When adjusted for spam, the actual number is ten times smaller at 0.0003%.

Appeals

Every user can appeal actions taken against their account. We think that appeals are an important part of the process: just as you deserve a chance to be heard when action is taken against you in a judicial system, you deserve the same chance when action is taken against your Discord account.

While we weren’t able to prepare the number of appeals made and granted for this Transparency Report, we’re looking to add it in future reports. However, in a recent audit of a thousand harassment bans from the first three months of 2019, only two were overturned. Our system, with its emphasis on manual review of a message’s context, stresses being confident before we take action on an account, so we expect this number to be quite low. We’re not perfect, and mistakes may happen even though we go to great lengths to ensure that we’re only taking action when it’s warranted.

Servers banned in the first three months of 2019

Accounts are only part of the story because of the way Discord is structured: communication happens between users in DMs as well as in servers.

Individual accounts may be responsible for bad activity, but as part of our investigations, we also take action when groups of users collectively violate our Community Guidelines in a server dedicated to that content.

Between January and April of 2019, Discord removed the following number of communities for Terms of Service violations:

Over half of all the servers we removed were communities dedicated to sharing game hacks and cheats, or selling cracked accounts.

When Discord removes servers, one thing we focus on is reducing the spread of harm that comes from strength in numbers. Our graph shows that the most common form of this is compromising gameplay (posting cheats, selling rare items), but we unfortunately also receive reports of servers focused on spreading hate speech, harassing others, and convincing others to follow dangerous ideologies.

Discord takes these reports seriously and removes servers exhibiting extremist behavior. We take a comprehensive approach by reviewing user-generated reports, as well as working with law enforcement agencies, third-parties (such as news outlets and academics), and organizations focused on fighting hate (like the Anti-Defamation League and Southern Poverty Law Center) to make sure we’re up-to-date and ahead of any potential risks.

What the numbers don’t show

Numbers are critical for transparency. But numbers don’t tell the whole story, and they sometimes don’t reflect the nuances of what it means to keep a platform safe.

This was especially true on the evening of March 14th when our Trust and Safety team began receiving reports that a disturbing and violent video had just been live-streamed in Christchurch, New Zealand. At first, our primary goal was removing the graphic video as quickly as possible, wherever users may have shared it. Although we received fewer reports about the video in the following hours, we saw an increase in reports of ‘affiliated content’. We took aggressive action to remove users glorifying the attack and impersonating the shooter; we took action on servers dedicated to dissecting the shooter’s manifesto, servers in support of the shooter’s agenda, and even memes that were made and distributed around the shooting.

Our team worked around the clock to thoroughly review each report, educate users about the harm of this content, and remove any content that violated our Community Guidelines. Over the course of the first ten days after this horrific event, we received a few hundred reports about content related to the shooting and issued 1,397 account bans alongside 159 server removals for related violations.

Closing remarks

Thanks for taking the time to check out our first Trust and Safety Transparency Report for the beginning of 2019. We recognize that this may be a lot to digest and may leave you with questions, so we’d love to hear from you.

We believe that transparency to users should be a norm among tech companies and should provide meaningful details and insights, so people can better determine how platforms keep their users safe. We believe that sharing as much as we can will motivate other companies to do the same. The more harm we can bring out of the shadows together, the closer we are to removing harm from the Internet altogether.

As we mentioned, this is our first time creating a Transparency Report. Moving forward, we’ll keep iterating and improving and look to publish these reports regularly.

Finally, it’s no small task for our small team to keep a platform of more than 250 million users safe, and we so appreciate every report and all the feedback we receive from our engaged community. It’s motivating to know you’re just as passionate about keeping Discord safe as we are.

If you’re super curious to learn more about Discord, or fighting the bad guys excites you, we’d love for you to join us!
