Transparency continues to be an important step towards our goal of building trust with our users. Our biannual reports are a way to demonstrate our commitment to keeping Discord safe.
The pace of our work accelerated in the first half of 2021 as we grappled with the task of moderating our growing, evolving platform. Trust & Safety continued to invest in resources to respond to the growth that has resulted from the COVID-19 pandemic. The team worked to scale reactive operations and improve methods to proactively detect and remove abuse.
Our latest report includes data on the actions taken by Trust & Safety between the months of January and June 2021. We have refined some of our reporting standards to ensure that we are presenting the most meaningful data in the clearest way possible. We are also detailing for the first time the results of our work to proactively remove cybercriminal groups from the platform.
User reports received
User reports remain an important avenue by which we identify violations of our Community Guidelines. All Discord users have the option to write in to us with reports of violations of our platform rules. We look at each report and investigate the behavior to see if a violation has occurred. It is important to note that not all of the reports submitted will be actionable.
Trust & Safety received 437,190 reports in the first half of 2021, an increase of slightly over 80,000 from the previous six months. A look back to 2020 provides helpful perspective into how much the task of content moderation at Discord has grown over the course of the last year. Our report volume in January 2020 was slightly over 25,000. We received over 70,000 monthly reports one year later, and that number continued to climb throughout the first half of 2021.
The continued growth of our user base may explain the rise in user reports: we have welcomed many more people to Discord since the beginning of 2020, and in September 2021 we announced that our platform had reached 150 million monthly active users. Heightened usage of our service during the COVID-19 pandemic may also have compelled more users to submit reports.
Report category breakdown
We have maintained most of the category names and groupings from our last report. The biggest change is our shift away from using the term “Not Safe for Work” (NSFW). The category we previously referred to as NSFW has been renamed in this report to Graphic Content. It encompasses gore, animal cruelty, sexually explicit content, and unsolicited pornography.
We observed the continuation of a number of trends in reports during the first six months of 2021. Harassment continued to be the most commonly reported category. High-harm areas like CSAM and Extremism continued to make up comparatively small fractions of our overall volume.
The Extremist or Violent Content category jumped noticeably from 4,935 reports in the second half of 2020 to 13,102 in the first half of 2021. It is again worth noting that not all of these reports were actionable — and many may have ended up being reclassified in lower-harm categories.
One other notable jump was in Spam. This category went from 25,672 reports in the second half of 2020 to 58,215 in the first half of 2021 — making it the second-most-common report type.
Report action rates
Trust & Safety works hard to ensure that the most actionable and sensitive reports are routed to and resolved by the appropriate subject matter experts. Our action rates for CSAM and Self-Harm reports increased between January and June 2021: the action rate for CSAM rose from 33% in our previous report to 45%, and the rate for Self-Harm moved from slightly under 13% to 23%. We will continue to treat child-harm cases and threats to life with the utmost care.
Cybercrime had the greatest number of actioned reports of any category in the first half of 2021, and it also accounted for one of the highest overall action rates. The most actionable reports during this period were for the high-harm categories of CSAM and Exploitative Content.
The two categories that received the most reports had some of the lowest action rates: About 17% of all Spam reports were actionable, and Harassment was actioned just 13% of the time.
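As a rough illustration, the sketch below shows how an action rate of this kind can be computed, assuming the rate is simply the number of actioned reports divided by the number of reports received in a category; the counts used are hypothetical round numbers chosen only to mirror the rates quoted above, not figures from this report.

```python
# Minimal sketch of a per-category action rate.
# Assumption: action rate = actioned reports / reports received.
# The counts below are hypothetical, chosen only to mirror the quoted rates.

def action_rate(actioned: int, received: int) -> float:
    """Return the share of received reports that resulted in an action."""
    return actioned / received if received else 0.0

hypothetical_reports = {
    # category: (actioned, received) -- illustrative values only
    "Spam": (1_700, 10_000),
    "Harassment": (1_300, 10_000),
}

for category, (actioned, received) in hypothetical_reports.items():
    print(f"{category}: {action_rate(actioned, received):.0%} of reports actioned")
```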
A list of the actions taken by Trust & Safety can be found on our Safety Center. Although not all reports will result in action against a user or server, the efforts of our user base to fight abusive conduct will always be welcome, and we will continue to act on reports of content and behavior that violate our policies.
Actions by Trust & Safety
Account deletions
Trust & Safety removed 470,465 non-Spam accounts between January and June 2021. This was a significant rise from the 266,075 non-Spam deletions during the second half of 2020.
Exploitative Content made a particularly large contribution to this overall rise. The category went from around 130,000 removals in the second half of 2020 to 238,019 in the current period. More details about our proactive work to remove Exploitative Content violations can be found below.
Some of the other notable increases came in categories where Trust & Safety has done sustained proactive work. Cybercrime account deletions rose from around 15,000 in the previous reporting period to slightly over 40,000 in the current period. CSAM deletions also jumped from about 23,000 in the second half of 2020 to nearly 52,000 in the first half of 2021.
Spam again contributed the greatest number of deletions of any one category. Trust & Safety has continued to make advances in techniques that proactively stop spam. 19,744,476 accounts were actioned for spamming and spam-related offenses during the first half of 2021; this figure includes outright account deletions as well as other steps taken to halt activity by spam accounts.
Server deletions
Trust & Safety removed 43,044 servers for violations of our Community Guidelines during the first half of 2021 — up from 27,410 servers in the second half of 2020. Cybercrime and Exploitative Content were again the most frequent reasons for server removal. The jump in Cybercrime from slightly under 6,000 in the previous period to just over 8,000 in the current period likely owes to our sustained proactive work to keep Discord free from illegal activities.
Exploitative Content server removals rose from 5,103 to 11,450. This increase of about 124% partly owes to the inclusion of two new subcategories (see below) — but it also speaks to Trust & Safety’s commitment to keeping exploitative and illegal content off the platform.
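The roughly 124% figure follows directly from the two server counts quoted above; a quick check of that arithmetic:

```python
# Quick check of the ~124% increase in Exploitative Content server removals
# (5,103 in the second half of 2020 -> 11,450 in the first half of 2021).
previous, current = 5_103, 11_450
print(f"{(current - previous) / previous:.0%}")  # 124%
```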
Account and server warnings
Trust & Safety can issue warnings to accounts or servers for lower-harm violations of our Community Guidelines. Warnings are frequently intended to correct problematic behaviors that do not call for permanent removal from the platform. We do not issue any warnings for CSAM or Cybercrime — and we also do not warn servers for Hacks and Cheats or Self-Harm Concerns.
Account and server warnings during the first half of 2021 both decreased from the previous six months. The total number of account warnings dropped from 28,170 to 23,918 in the current period. The previous server-level total of 3,999 decreased to 3,884 in the first half of 2021.
The downward trend in warnings is interesting to observe alongside the rise in deletions. It suggests that, in the first half of 2021, Trust & Safety handled a greater number of cases involving content that warranted immediate removal from the platform. Warnings nevertheless remain an important way for us to supplement more permanent remedial actions.
Reports to NCMEC
Discord has a zero-tolerance policy for child-harm content. Users who upload abuse material of minors to Discord are reported to the National Center for Missing and Exploited Children (NCMEC) and are promptly removed from the service. We have continued to partner with NCMEC to ensure that grooming and endangerment cases are quickly escalated to authorities.
Trust & Safety reported a total of 9,497 accounts to NCMEC between January and June 2021. The large majority of these reports continued to be for child-harm images flagged through PhotoDNA hashing software. 150 of our reports were for higher-harm grooming or endangerment situations — an increase of roughly 80% from the previous six-month period.
This increase paralleled a larger rise in media reports to NCMEC. Trust & Safety continually works to build innovative tools that allow us to find more instances of reportable abuse material and activity. The team has also worked since the last reporting period to increase staffing levels and ensure that all child-harm-related escalations are given the time and attention they deserve.
We remain steadfastly committed to keeping child-harm content off the internet. Discord has begun actively participating in a cross-industry collaboration organized by the Technology Coalition. We have already gained tremendous insights from other industry leaders — and we are optimistic about the opportunities emerging from our partnerships. Please see our Safety Center for resources explaining how parents can work to ensure their teens’ safety on Discord.
Proactive server deletions
One of our highest priorities continues to be removing high-harm actors before users encounter them. Trust & Safety proactively pursues groups that engage in illegal activities or that could otherwise cause significant real-world harm. We have added to the categories we reported on previously to account for new internal policies.
Cybercrime
Discord does not tolerate any criminal activity conducted or facilitated through our platform. We conceive of Cybercrime as a distinct category involving a set of illegal activities that seek to facilitate fraud or gain unauthorized access to systems and data for malicious purposes. Much of Trust & Safety’s recent proactive work in this area has focused on groups that engage in facilitating payment fraud, malware distribution, and organized account hijacking activities.
Trust & Safety removed a total of 8,180 servers for Cybercrime in the six months between January and June 2021. Although that represents the second-highest total number of server removals in this period (see the “Server deletions” section), we are proud that more than half of these groups were taken down proactively. This speaks to the efficacy of our detection methods as well as our strong commitment to keeping Discord free from harmful cybercriminal activity.
Exploitative Content & CSAM
We expanded our definition of Exploitative Content to cover more groups that violate our Community Guidelines. This umbrella category encompasses sexually explicit material that has the potential to damage subjects in a psychological, reputational, or economic way.
This report introduces two new Exploitative Content subcategories: Leaks and Sexually Degrading Content (SDC). We remove servers for Leaks if their sole purpose is to share monetized adult content without the creator’s consent. SDC refers to behaviors that sexually degrade, shame, or harass the subjects of a piece of media without their consent.
The four subcategories within Exploitative Content (Leaks, Nonconsensual Pornography, Sexualized Content Related to Minors, and SDC) account for a combined total of 9,906 server removals during the first half of 2021 — 4,059 of which were taken down proactively.
Child Sexual Abuse Material is another category of harmful content in which we invest significant resources, and we continue to bolster our ability to take that content down proactively. Our proactive removal rate for servers sharing CSAM was even higher during this period: Trust & Safety proactively removed 1,960 (or about 70%) of the 2,767 servers deleted for CSAM.
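The roughly 70% figure follows from the CSAM server counts quoted above, and the analogous proactive share for the Exploitative Content subcategories can be derived the same way from the counts in the previous paragraphs (the ~41% value below is computed here, not stated elsewhere in this report):

```python
# Proactive removal shares derived from the server counts quoted above.
csam_proactive, csam_total = 1_960, 2_767
exploitative_proactive, exploitative_total = 4_059, 9_906

print(f"CSAM: {csam_proactive / csam_total:.1%} of deleted servers removed proactively")  # ~70.8%
print(f"Exploitative Content: {exploitative_proactive / exploitative_total:.1%}")         # ~41.0%
```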
Violent Extremism
2021 began with the terrifying scene of the storming of the United States Capitol. In the weeks before the event, our team of counter-extremism experts began monitoring the situation and, in December 2020, proactively removed a number of servers involved in discussing and organizing the event. The team removed 27 servers and 857 accounts on January 6th, 2021. We later followed up with a blog post detailing our work to counter violent extremist ideologies.
A total of 1,834 servers were removed for Violent Extremism in the six months between January and June 2021. We are proud that upwards of 73% of these high-harm spaces were removed before being reported to Trust & Safety. We will continue to invest in proactive tooling and resources to ensure that violent and hateful groups do not find a home on Discord.
The ban appeals process
Ban appeals provide an important opportunity for us to engage with users who have been removed from the platform for violations of our Community Guidelines. The ban appeal process also allows us to audit our work and correct any actions that may have been made in error. When reviewing an appeal, Trust & Safety considers the severity of harm from the violative content, the potential for future harm on and off the platform, and whether an individual has grown and learned from their time away.
We received 104,635 non-Spam ban appeals in the first half of 2021, an increase of about 20,000 from the previous reporting period. Trust & Safety reinstated 3,226 of the 470,465 accounts removed for non-Spam reasons, and 461 Spam bans were also manually overturned. The unban rate for many categories dropped significantly from our previous report.
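For a rough sense of scale, and assuming that an unban rate is calculated as accounts reinstated divided by accounts removed (our reading of the term, not a definition stated in this report), the overall non-Spam figures work out as follows:

```python
# Overall non-Spam figures from this reporting period.
# Assumption: unban rate = accounts reinstated / accounts removed.
appeals_received = 104_635
accounts_removed = 470_465
accounts_reinstated = 3_226

print(f"Share of removed accounts reinstated: {accounts_reinstated / accounts_removed:.2%}")  # ~0.69%
print(f"Share of appeals that were granted:   {accounts_reinstated / appeals_received:.2%}")  # ~3.08%
```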
The case studies below are taken from actual user ban appeals received by Trust & Safety during the first half of 2021. We have removed identifying information to respect user privacy.
Case study 1: Participation in doxxing server
A user had their account removed for participating in a server that was dedicated to releasing the personally identifiable information of public figures. They appealed their ban multiple times.
Result: We consider the malicious sharing of personal, private information to be a high-harm violation of our platform rules. Trust & Safety confirmed this user’s actions and left the decision unchanged. We do not allow behavior that directly or indirectly jeopardizes the safety of others.
Case study 2: Sexual comments about minors
A user was reported by moderators for making repeated sexually graphic comments about minors in their server. The individual’s account was permanently suspended by Trust & Safety.
Result: We verified that the comments originated from the appellant before affirming the original judgment. We maintained the ban for this user. Child safety continues to be one of Discord’s highest priorities. We do not allow content sexualizing minors — including all forms of text and media that describe or depict minors in a pornographic, sexually suggestive, or violent manner.
Case study 3: Sexually explicit content in avatar
A user was permanently removed from the platform for having sexually explicit content in their avatar. Trust & Safety investigated the circumstances of the ban after the user appealed.
Result: We discovered that the user had previously had sexually explicit content in their avatar, had been warned and had the picture cleared, and had then re-uploaded adult content. We maintained the ban for this user. We do not allow deliberate disregard for our platform rules.
Case study 4: Server promoting violent extremism
Trust & Safety’s counter-extremism team proactively removed a server along with all of its members. The server was dedicated to promoting egregious extremist content. One user appealed with the claim that they had joined the server without fully understanding its purpose.
Result: We investigated the user’s claims and found that they had joined the server immediately before its deletion and had not made any posts in the server. The user had been on Discord for several years and had not once been reported to Trust & Safety. They also were not part of any other server that violated our platform rules. We lifted the ban on the account in good faith.
Requests from law enforcement
Discord has a legal obligation to respond to requests from United States law enforcement officers handling criminal cases. Our Legal Operations team scrutinizes all pieces of legal process we receive to make sure they are legally valid, clear, and appropriately scoped.
We received a total of 2,632 pieces of legal process during the first half of 2021. We complied with the large majority of those requests — ultimately finding 2,229 of them both legally valid and specific enough for us to identify a user. We limit disclosures of user information and content so they match the specific circumstances dictated by each request.
Our Legal Operations team received an additional 378 requests for emergency disclosure from law enforcement during this period. These requests originated from law enforcement agencies within and outside the United States. We were able to identify accounts for 306 of these requests, and we disclosed basic subscriber information in 158 of those cases.
We disclose user data to law enforcement absent legal process only when required by an emergency involving danger of death or serious physical injury to any person.
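For context, the compliance and disclosure rates implied by the counts above can be worked out directly from this report's figures (the percentages below are derived here, not quoted elsewhere in the report):

```python
# Rates implied by the legal-process counts quoted above.
legal_received, legal_complied = 2_632, 2_229
emergency_received, emergency_identified, emergency_disclosed = 378, 306, 158

print(f"Legal process found valid and specific enough to act on: {legal_complied / legal_received:.1%}")       # ~84.7%
print(f"Emergency requests where an account was identified: {emergency_identified / emergency_received:.1%}")  # ~81.0%
print(f"Identified emergency requests with disclosure: {emergency_disclosed / emergency_identified:.1%}")      # ~51.6%
```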
Discord’s investment in safety
One recent step we have taken to improve safety on Discord was the July 2021 acquisition of Sentropy, an AI-based software company. The addition of this team will allow us to expand our ability to detect and remove bad content. This move also signals our continued commitment to combating hate, violence, and other forms of harm as our platform grows.
We also launched a Trust & Safety team based out of the Netherlands in October 2021. We believe that context is essential to successful content moderation. Hiring exceptional employees outside the United States who bring both language skills and cultural sensitivity is an important part of our effort to bring a global perspective to Discord’s Trust & Safety work.
We expect this international investment to continue in 2022.
Conclusion
These biannual reports remain critically important to us. They allow us to take stock of the work we were able to accomplish over the course of the last six months — and they also provide a valuable opportunity for us to reflect on the areas where we can improve. As a way to further bolster our commitment to transparency, we have begun making semi-regular posts to our Policy & Safety Blog, with details about how we enforce our policies. We are excited to share more updates there and in future reports on our ongoing work to keep Discord safe.