Discord

Note: The graph relating to “Proactive removals” was updated in April 2021. The proactive server deletions were initially overreported. Corrected totals are provided in the new graphic.

Hi again!

A lot has happened on Discord since our last transparency report. We’d like to spend some time catching you up on the biggest changes and how we’ve responded to them.

In May, we refreshed our Community Guidelines. The new document spells out the kinds of behavior and content that are and aren't allowed on Discord, and it sharpens some of our existing policies so they more clearly reflect what we stand for. We wanted it to sum up the values we hold near and dear: treating each other with respect, treating your communities with respect, and keeping Discord a safe and friendly place for everyone.

Later in June, we rolled out our new Safety Center. This is a great resource for everyone, with tips on how to keep your account safe, articles explaining Discord for parents and educators, and a clear breakdown of how we enforce our policies. In the future, we plan to add more how-to articles to address common issues and resolutions to them. We’re also looking to add more case studies, policy documentation, and resources on community moderation.

Finally, the biggest change since our last transparency report has been COVID-19 and the growth it has driven for our service. With more people staying home, many joined Discord to talk with their friends and seek out communities. In June, we announced that Discord has grown to more than 100 million monthly active users. As you might expect, welcoming more and more people to Discord brought a corresponding rise in reports.

This graph shows the month-over-month increase in reports received by Trust & Safety in the first half of 2020. The number of reports we received nearly doubled over this period.

As Discord grows, one of our top priorities is to help everyone continue to have a safe experience. For that reason, the Trust & Safety team has nearly doubled in size over the last six months and remains one of the biggest teams in the company. We're working hard to make sure that anyone at any time can write in, have their issue looked at, and get the help they need.

All the numbers (and announcing a new category)

In keeping with our previous reports, we’ll detail the breakdown of reports by category, the rates at which we action violations of our Terms or Guidelines, the total number of actions taken by Trust & Safety, and the rate at which we restore accounts.

One of the new things we want to spend some time on in this report is user warnings and their effectiveness. Warnings are useful in situations not involving a serious threat of harm because they present an opportunity for user education without us taking permanent action on an account. They’re a good step towards increasing the variety of tools in our toolset, and as Discord continues to grow, we’ll continue to explore new ways of tackling abuse.

We want to make each new transparency report more useful and accessible than the last, so we've updated how we're presenting information this time around. The first change is the category breakdown: we've standardized the way we categorize violations. In each graph, you can expect to see the same 11 categories. Broadly speaking, these are representative of all the different kinds of reports that come in to Trust & Safety, and we feel they paint a clear picture of the behavior and content we're working to keep off Discord.

To that end, we may update our categories in future reports to better reflect what we see and what users report to us. For this report, we’ve introduced a new category: cybercrime. Cybercrime covers social engineering, fraud, scams, and things like distributed denial of service (DDoS) attacks. Our previous hacks and cheats category is now exclusive to malicious behavior in online games, such as account cracking and distribution of cheats.

Reports received from users

Looking back at the second half of 2019, Trust & Safety received about 128,000 user reports. In the first six months of 2020, we shot well past that number, up to roughly 235,000 reports, driven by growth in the number of people on the platform and the increased time they spent on Discord during the pandemic. Here's the breakdown of violations by category:

These figures represent inbound reports: they aren't confirmed instances of bad behavior or objectionable content on Discord, but rather all the reports we received and investigated.

Next is a graph that shows the action rate of reports by category:

As you can see, there's a correlation between how easy a violation is to verify and how actionable the report is. Spam on Discord most often looks like unsolicited direct messages telling a user to join a server or asking them to add a bot to a server (free Nitro bots and cryptocurrency scams have been the most common in the last six months), and these reports have a high actionability rate. Exploitative content and doxxing reports are also up there. With those categories, it's often immediately obvious when a violation has occurred, and identifying motives is relatively straightforward.

On the other side of the actionability scale is harassment. Harassment is the largest category of reports received but ends up being the least actionable. That makes sense: unlike doxxing, for example, there's wide variance in what individuals consider harassing behavior.

We do get some reports of malicious ban evading and flooding of friend requests, which are generally actionable. But we get far more reports of someone calling someone else names, or reports filed under harassment that simply state the reporter doesn't like the other person or what they say. Those aren't actionable by Trust & Safety and are more appropriately addressed by blocking the user in question.

Taking action on users and servers

Users banned

In addition to getting more reports than ever before, we’re also taking more action than before. Let’s look at the accounts deleted for violating our Terms or Guidelines:

This chart represents total user bans in the first half of 2020. It isn't very helpful for understanding anything except that there's a lot of spam: in the first half of 2020, we banned 4,083,444 accounts for spam. Here's a more helpful chart that excludes spam:

Raiding may stand out as being somewhat high, but there's a reason for this. While most other reports concern individual activity, raiding is generally coordinated activity by a group. When users decide to raid, they join another server all at once and spam disruptive comments, generally ones that violate the Community Guidelines. When we confirm an instance of raiding, we take action against all the group members who participated. Only about 1.7% of the reports we receive are about raiding, but as you can see, it results in a higher number of bans because we take action on every user responsible for the activity.

And you’ll see that exploitative content is the reason for the highest number of bans, even though this category only makes up about eight percent of reports received. This is because the majority of exploitative content posted to Discord is deleted proactively, without users having to report the content (we’ll address this later in the report). Harassment, on the other hand, makes up about a third of the reports received but accounts for a smaller fraction of users actioned.
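To make the relationship between report share and ban share concrete, here is a minimal sketch in Python that computes each category's share of non-spam account deletions from the totals in the appendix of this report. The percentages it prints are simple arithmetic on those published totals, not an additional official breakdown.

```python
# Share of non-spam account deletions by category, H1 2020.
# Totals are taken from the "User deletions in H1 2020" appendix below.
deletions = {
    "Cybercrime": 8_425,
    "Hacks and Cheats": 7_874,
    "Doxxing": 5_064,
    "Harassment": 38_764,
    "Exploitative Content": 162_621,
    "NSFW": 4_601,
    "Malware": 1_657,
    "Other": 3_439,
    "Platform": 4_311,
    "Raiding": 16_012,
}

total = sum(deletions.values())  # 252,768 non-spam deletions
for category, count in sorted(deletions.items(), key=lambda kv: -kv[1]):
    print(f"{category:22s} {count:>8,}  {count / total:6.1%}")
```

Running this shows, for example, that exploitative content accounts for roughly 64% of non-spam bans while harassment accounts for roughly 15%, which is the gap described above.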

Users warned (and the effectiveness of warning)

Sometimes we’ll see behavior in a report that violates our Terms and Guidelines but isn’t severe enough to warrant permanent account deletion. In these instances, Trust & Safety can issue a warning, which will send a message letting the user know they’ve been warned and which rule they broke. Warnings may also be accompanied by a temporary ban, ranging from one hour to two weeks, proportionate to the severity of the violation of our Community Guidelines.

For the first time ever, we’re publishing the total number of warnings given.

If you look at this graph next to user deletions, it's clear that there are fewer warnings than deletions. The logic behind this is that in most cases there's a clear and deliberate attempt to violate our policies. For example, if a user says "Get doxxed" and posts personally identifiable information, that calls for a deletion, not a warning. But we often give users the benefit of the doubt in cases where the intent is not certain.

One of the categories we warn for most frequently is NSFW content outside of an NSFW gate. A profile picture containing adult content goes against our Community Guidelines because it can't be age-gated and is therefore visible to anyone who connects with the user. Instead of deleting the account, though, Trust & Safety will issue a warning, clear the avatar, and allow the user to upload a more appropriate picture.

Our numbers show that warnings are effective at course-correcting users. The chart below shows the percentage of warned users who later had their accounts banned for the same reason: fewer than three percent. As our platform continues to grow, we believe in using many tools to address abuse, including both user education and other actions.

We also understand that not every tool is fit for every situation. In categories such as cybercrime and exploitative content, Trust & Safety rarely issues warnings. With the latter especially, which includes child exploitation, we take a particularly hard stance against those posting the content. In the next section, we’ll say a bit more about our efforts to proactively detect and delete users who post exploitative content.

Servers banned

Sometimes, taking action on an individual isn’t enough: when a server’s sole purpose revolves around behavior or content that violates our Terms or Guidelines, we will take action on the entire community. Generally, this will only affect the server itself and the owner and moderators of the server, but in rare cases, it may include all participants. The chart below shows the number of servers removed in the first six months of 2020.

Part of our goal in showcasing actions on servers is to illustrate the bigger picture. A keen eye might notice there’s a difference in the charts between users and servers actioned, and that the highest numbers on one are not generally the highest on the other. This is because abuse on Discord comes at different levels, and we address it at both the server and user level.

For example, while the number of individuals we delete for hacks and cheats or cybercrime is generally low, the number of servers we remove for the same reasons is comparatively high. We ban much more often at the server level because these activities are typically coordinated within servers, and the harm comes from the server existing as a whole. In other situations, like doxxing, the violation is less likely to come from a server dedicated to it and more likely from an individual user, so the ratio of user bans to server bans is closer.

Finally, in cases of exploitative content — again, this is non-consensual pornography and child exploitation — we action all accounts in the server, so the ratio is the other way around.

Proactive removals: Doing more to keep Discord safe

As an earlier blog post by our co-founders notes, we are deeply committed to racial equality. Part of that for us means making sure racist and extremist groups stay off Discord. As the Black Lives Matter protests swept through the country, Trust & Safety ramped up efforts to ban groups seeking to interfere with the peaceful protests. We’re very proud to have shut down some of the worst-of-the-worst organizing groups before they could cause harm. In terms of hard numbers, as the first half of 2020 got underway, the ratio of proactive to reactive extremist servers removed climbed as high as 5:1.

Alongside these efforts, we continued to remove all exploitative content. This category groups together child sexual abuse material and nonconsensual pornography. As noted in our last transparency report, when child-harm material is uploaded to Discord, we quickly report it to the National Center for Missing and Exploited Children and action the account sharing it. We have a similar zero-tolerance attitude towards nonconsensual images on Discord: these servers are removed, in most cases without users having to stumble into them.

In the graph below, we’ve totaled the number of extremist and exploitative server deletions for the first six months of 2020. These volumes show a slight uptick from our proactive deletions in 2019. As we move into the second half of 2020, we will continue to monitor and deplatform extremist groups organizing on Discord as well as any users sharing exploitative content.

A major success we had in the first half of 2020 was the complete dismantling of a network of servers sharing nonconsensual pornography. Our proactive team identified an entire network of 700 servers and systematically removed the content and communities involved. When those users attempted to return to Discord, they were again identified and removed. Nonconsensual pornography has no place on our platform, and we’ll continue to take swift action against these communities and their members.

One final note on our recent events: as the COVID-19 pandemic spread around the world, the Trust & Safety team spun up a taskforce to investigate potential servers spreading dangerous misinformation about health and disease. Across our efforts, we found that there was no significant degree of misinformation on Discord, and instead that Discord was being used as hoped — to help build communities that support and encourage others while we face this unprecedented global challenge. In the future, we will continue to deploy strike teams to target other areas where we see the possibility of misinformation that has a high chance of resulting in real-world harm.

Appeals and account restorations

The final piece we will cover is account restorations. Anyone who has been permanently banned for violating our Terms or Guidelines is welcome to write to Trust & Safety and have their ban reviewed. The chart below displays the rate at which we unban accounts by category.

For context, here are the aggregate numbers for the first half of 2020 (setting aside spam):

  • We banned 252,768 accounts.
  • On those accounts, we received 70,349 appeals.
  • On those appeals, we reversed our action and unbanned 3,035 accounts.
  • Overall, the rate of reversal was about 1.2%.

For spam, we took action on just over 4 million accounts, received 45,046 appeals, and unbanned about 18,000 accounts, which results in a reversal rate of 0.44%.
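For readers who want to check these rates, here is a minimal sketch that reproduces them from the figures above. Note that the reversal rate is computed against total bans rather than against appeals, and the spam unban count is the approximate figure quoted above.

```python
# Appeal outcomes for H1 2020, reproduced from the figures in this section.
# "Reversal rate" is unbans divided by total bans in each group.
groups = {
    "non-spam": {"banned": 252_768, "appeals": 70_349, "unbanned": 3_035},
    "spam": {"banned": 4_083_444, "appeals": 45_046, "unbanned": 18_000},  # ~18,000 is approximate
}

for name, g in groups.items():
    appeal_rate = g["appeals"] / g["banned"]
    reversal_rate = g["unbanned"] / g["banned"]
    print(f"{name}: {appeal_rate:.1%} of banned accounts appealed, "
          f"{reversal_rate:.2%} of bans were reversed")
```

This yields about 1.2% for non-spam bans and 0.44% for spam bans, matching the rates reported above.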

The “Other” category may stand out as being quite high, but it represents one of the smallest volumes of unbans: only 289 account restorations in the entire first half of 2020.

We will consider lifting an account ban under a few circumstances:

  • First, if a mistake was made when originally actioning the account. While we investigate every situation thoroughly, we are human; with tens of thousands of reports, many of them requiring urgent attention, mistakes can happen.
  • Second, if the appeal reveals information or context that substantially changes our understanding of the situation.
  • Third, if the activity that led to the ban is low-harm, the potential for a repeat offense appears to be low, and the user takes responsibility for their actions.

The third situation is the most common and leads to the overwhelming majority of our unbans. We generally believe that most individuals deserve a second chance: if they understand what they did and pledge to do better, we're willing to give them one.

What’s next for Trust & Safety

The last six months were a whirlwind for both the company and Trust & Safety. Our top priority has been making sure Trust & Safety scales with the needs of users and can continue to keep everyone safe on Discord. We received almost twice as many reports in June as we did in January, and we've been working across the board on internal tools, machine learning, and hiring and onboarding new Trust & Safety employees to make sure we can keep up with all the new Discord users. We're proud of where we stand today, but we also know there is more work to be done.

We’re diving into the second half of the year with a number of projects that will make Discord safer still. Some of those will operate behind the scenes and reduce spam, some of them will make user reporting easier, and others are targeted at helping out our moderators. We look forward to sharing more with you in blogs over the next few months.

Finally, we plan to publish these reports every six months. The rest of this year will be covered in our next report in January 2021. With every transparency report, we seek to provide more information (like our breakout of warnings that we featured this time) and continue to make them accessible and interesting.

Thanks for making Discord your place to talk — we’re excited to continue the conversation with you.

Appendix

User deletions in H1 2020

  • Cybercrime: 8,425
  • Hacks and Cheats: 7,874
  • Doxxing: 5,064
  • Harassment: 38,764
  • Exploitative Content: 162,621
  • NSFW: 4,601
  • Malware: 1,657
  • Other: 3,439
  • Platform: 4,311
  • Spam: 4,083,444
  • Raiding: 16,012

User warnings in H1 2020

  • Hacks and Cheats: 46
  • Doxxing: 528
  • Harassment: 3,491
  • NSFW: 2,462
  • Malware: 99
  • Other: 1,301
  • Platform: 958
  • Spam: 720
  • Raiding: 599

Server bans in H1 2020

  • Cybercrime: 2,481
  • Hacks and Cheats: 5,682
  • Doxxing: 291
  • Harassment: 3,355
  • Exploitative Content: 5,420
  • NSFW: 386
  • Malware: 68
  • Other: 564
  • Platform: 51
  • Spam: 1,035
  • Raiding: 309