April 14, 2023

Our Response to the Pentagon Leaks

In recent days, U.S. government documents alleged to be classified were shared on Discord servers. The appearance of these documents on our platform was unusual – conversations on Discord generally relate to video games, music, art, sports, and other interests of our users around the world.

Classified military intelligence documents pose a significant, complex challenge for Discord, as they do for other online platforms – only authorized government personnel can determine whether a document is classified, unclassified, or even authentic. And currently, there is no structured process for the government to communicate its determinations to platforms like Discord.

However, what is clear is that unauthorized disclosure of classified government documents violates Discord's Terms of Service, which prohibit the posting of illegal content on our platform. Because of this, we have removed content, terminated user accounts, and are cooperating with the United States Departments of Defense and Justice in their investigation. And, while a suspect has been arrested, we continue to conduct our own internal investigation of these events.

Discord is a communications platform that brings people together over shared experiences. More than 150 million people around the world use our platform every month to stay in touch and spend time with the most important people in their lives.

The safety of all the people and communities that call Discord their online home is a top priority. Posting illegal content and engaging in racist behavior, both of which were observed in this incident, are neither welcome nor allowed on Discord.

Below is a summary of what we can share about the recent incident, including actions taken and more details around our practices.

Summary of Current Situation

Because investigations are ongoing, we can only share limited details. What we can say is that the alleged documents were initially shared in a small, invite-only server on Discord. The original server has been deleted, but the materials have since appeared in several additional servers.

Our Terms of Service expressly prohibit using Discord for illegal or criminal purposes. This includes the sharing of documents that may be verifiably classified. Unlike the overwhelming majority of the people who find community on Discord, this group departed from the norms, broke our rules, and did so in apparent violation of the law.

Our mission is for Discord to be the place to hang out with friends online. With that mission comes a responsibility to make Discord a safe and positive place for our users. For example, our policies clearly outline that hate speech, threats and violent extremism have no place on our platform. When Discord’s Trust and Safety team learns of content that violates our rules, we act quickly to remove it. In this instance, we have banned users involved with the original distribution of the materials, deleted content deemed to be against our Terms, and issued warnings to users who continue to share the materials in question.

Our Approach to Delivering a Positive User Experience

This recent incident fundamentally represents a misuse of our platform and a violation of our platform rules. Our Terms of Service and Community Guidelines provide the universal rules for what is acceptable activity and content on Discord. When we become aware of violations to our policies, we take action.

The core of our mission is to give everyone the power to find and create belonging in their lives. Creating a safe environment on Discord is essential to achieve this, and is one of the ways we prevent misuse of our platform. Safety is at the core of everything we do and a primary area of investment as a business:

  1. We invest talent and resources towards safety efforts. From Safety and Policy to Engineering, Data, and Product teams, about 15 percent of all Discord employees are dedicated to working on safety. Creating a safer internet is at the heart of our collective mission.
  2. We continue to innovate how we scale safety mechanisms, with a focus on proactive detection. Millions of people around the world use Discord every day, and while the vast majority engage in positive ways, we take action on multiple fronts to address bad behavior and harmful content. For example, we use PhotoDNA image hashing to identify inappropriate images; we use advanced technology like machine learning models to identify and remedy offending content; and we empower and equip community moderators with tools and training to uphold our policies in their communities. You can read more about our safety initiatives and priorities below.
  3. Our ongoing work to protect users is conducted in collaboration and partnership with experts who share our mission to create a safer internet. We partner with a number of organizations to jointly confront challenges impacting internet users at large. For example, we partner with the Family Online Safety Institute, an international non-profit that endeavors to make the online world safer for children and families. We also cooperate with the National Center for Missing & Exploited Children (NCMEC), the Tech Coalition, and the Global Internet Forum to Counter Terrorism.

The fight against bad actors on communications platforms is unlikely to end soon, and our approach to safety is guided by the following principles:

  • Design for Safety: We make our products safe spaces by design and by default. Safety is and will remain part of the core product experience at Discord.
  • Prioritize the Highest Harms: We prioritize the issues that present the highest harm. This includes harm to our users and society (e.g. sexual exploitation, violence, sharing of illegal content) and harm to platform integrity (e.g. spam, account takeover, malware).
  • Design for Privacy: We carefully balance privacy and safety on the platform. We believe that users should be able to tailor their Discord experience to their preferences, including privacy.
  • Embrace Transparency & Knowledge Sharing: We continue to educate users, join coalitions, build relationships with experts, and publish our safety learnings including our Transparency Reports.

Underpinning all of this are two important considerations: our overall approach towards content moderation and our investments in technology solutions to keep our users safe.

Our Approach to Content Moderation

We currently employ three levers to moderate user content on Discord, while remaining mindful of user privacy:

  • User Controls: Our product architecture provides each user with fundamental control over their experience on Discord including who they communicate with, what content they see, and what communities they join or create.
  • Platform Moderation: Our universal Community Guidelines apply to all content and every interaction on the platform. These fundamental rules are enforced by Discord on an ongoing basis through a mix of proactive and reactive work.
  • Community Moderation: Server owners and volunteer community moderators define and enforce norms of behavior for their communities that can go beyond the Discord Community Guidelines. We enable our community moderators with technology (tools like AutoMod) as well as training and peer support (the Discord Moderator Academy).

There is constant innovation, within and beyond Discord, into how companies can effectively scale and deliver content moderation. Our approach will continue to evolve as we find new ways to do better for our users.

Our Technology Solutions

We believe that in the long term, machine learning will be an essential component of safety solutions. In 2021, we acquired Sentropy, a leader in AI-powered moderation systems, to advance our work in this domain. We will continue to balance technology with the judgment and contextual assessment of highly trained employees, while maintaining our strong stance on user privacy.

Here is an overview of some of our key investments in technology:

  • Safety Rules Engine: The rules engine allows our teams to evaluate user activities such as registrations, server joins, and other metadata. We can then analyze patterns of problematic behavior to make informed decisions and take uniform actions like user challenges or bans.
  • AutoMod: AutoMod allows community moderators to block messages with certain keywords, automatically block dangerous links, and identify harmful messages using machine learning. This technology empowers community moderators to keep their communities safe.
  • Visual Safety Platform: This service computes hashes of images uploaded to Discord and checks them against databases of known objectionable content, such as child sexual abuse material (CSAM).
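Discord's actual implementations of these systems are not public. As a rough, hypothetical illustration of two of the general techniques named above – keyword blocking (AutoMod-style) and matching uploads against a database of known-bad image hashes – here is a minimal sketch. The keyword list, hash database, and use of SHA-256 are all placeholder assumptions; production systems like PhotoDNA use perceptual hashes rather than cryptographic ones, so they can match visually similar images, not just byte-identical files.

```python
import hashlib

# Hypothetical sketch only: Discord's real AutoMod and Visual Safety
# Platform are not public. The keyword list and hash database below are
# placeholders, and SHA-256 stands in for a perceptual hash like PhotoDNA.

BLOCKED_KEYWORDS = {"badword1", "badword2"}  # moderator-configured terms

# Seeded with the well-known SHA-256 digest of empty input, so the demo
# below has a guaranteed match.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def keyword_filter(message: str) -> bool:
    """Return True if any word in the message is a blocked keyword."""
    words = message.lower().split()
    return any(word in BLOCKED_KEYWORDS for word in words)

def image_blocked(image_bytes: bytes) -> bool:
    """Return True if the image's hash matches a known-bad database entry."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

print(keyword_filter("this contains badword1"))  # True
print(keyword_filter("a perfectly fine message"))  # False
print(image_blocked(b""))  # True: empty input was seeded into the database
```

The key design point both techniques share is that they compare content against a precomputed denylist (of terms or of hashes), which makes them cheap enough to run proactively on every message or upload rather than only after a user report.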

Our Partnerships

In the field of online safety, we are inspired by the spirit of cooperation across companies and civil society groups. We are proud to engage and learn from a wide range of companies and organizations including:

  • National Center for Missing & Exploited Children
  • Family Online Safety Institute
  • Tech Coalition
  • Crisis Text Line
  • Digital Trust & Safety Partnership
  • Trust & Safety Professionals Association
  • Global Internet Forum to Counter Terrorism

This cooperation extends to our work with law enforcement agencies. When appropriate, Discord complies with information requests from law enforcement while respecting the privacy and rights of our users. Discord may also disclose information to authorities in emergency situations when we possess a good faith belief that there is imminent risk of serious physical injury or death. You can read more about how Discord works with law enforcement here.
