March 15, 2024

Our Approach to Content Moderation

As part of our commitment to keeping users safe on Discord, we want to provide a clear understanding of what’s happening in user spaces and how we moderate the content posted there. The goal is to keep you in the loop on how Discord identifies and removes unwanted content and conduct so you can focus on having fun.

Our Community Guidelines and Terms of Service support our principle that everyone can express themselves and spend time with their friends on Discord, but not at the expense of anyone else. Our goal is to stop bad actors and their activity before anyone else encounters it. Let’s take a look at how we do that.

Moderation across Discord

All users have the ability to report behavior to Discord. Our Safety team reviews user reports for violative behavior so we can take enforcement action where appropriate.

Discord also works hard to proactively identify harmful content and conduct so we can remove it before more users are exposed to it. We prioritize the highest-harm issues, such as child safety and violent extremism.

We use a variety of techniques and technology to identify this behavior, including:

  • Image hashing and machine-learning powered technologies that help us identify known and unknown child sexual abuse material (a simplified hash-matching sketch follows this list). We report child sexual abuse material, other related content, and perpetrators to the National Center for Missing & Exploited Children (NCMEC), which works with law enforcement to take appropriate action.
  • Machine learning models that use metadata and network patterns to identify bad actors or spaces with harmful content and activity.
  • Human-led investigations based on flags from our detection tools or reports.
  • Automated moderation (AutoMod), a server-level feature that empowers community moderation through features like keyword and spam filters that can automatically trigger moderation actions.
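
To make the first of these techniques more concrete, here is a minimal, hypothetical sketch of matching an image against a database of known-violative hashes. The hash values, function names, and use of SHA-256 are assumptions for illustration only; production systems typically rely on perceptual hashes (such as PhotoDNA) that also catch near-duplicates, and this is not Discord's implementation.

```python
import hashlib

# Hypothetical set of digests for known-violative images (placeholder values only).
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def image_matches_known_content(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears in the known-hash set.

    A cryptographic hash like SHA-256 only catches exact copies; perceptual
    hashing is what lets real systems match lightly edited images as well.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

if __name__ == "__main__":
    sample = b"...bytes of an uploaded image..."
    if image_matches_known_content(sample):
        print("Match found: escalate for review and reporting")
    else:
        print("No match against known hashes")
```

In practice, an automated match like this is one signal among several, and flags feed into the human-led investigations described above before enforcement or reporting happens.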

In larger communities, we may use automated means to proactively identify harmful content on the platform and enforce our Terms of Service and Community Guidelines.

If we identify a violation, we then determine the appropriate intervention based on the content or conduct and take action, such as removing the content or suspending a user account. The results of our actions are conveyed to users through our Warning System.

Creating a safe teen experience

In addition to all the measures we've outlined above, we add extra layers of protection for teen users, because we believe in protecting the most vulnerable people on Discord.

For teens, we may monitor the content or activity associated with teen accounts to identify potentially unwanted messages through Teen Safety Assist. For example, if we detect potentially unwanted DM activity directed at a teen account, we may send a safety alert in chat that gives the teen information about how to handle the situation. (For messages between adults, we do not proactively scan or read the content by default.)
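
As a loose illustration of the kind of signal such a feature might weigh (not Discord's actual detection logic), the sketch below flags a first-contact DM to a teen account from a sender with no shared servers or prior conversation, so an in-chat safety alert could be shown. Every field name and rule here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DirectMessage:
    # Hypothetical fields for illustration; not Discord's data model.
    recipient_is_teen: bool
    sender_is_friend: bool
    shared_server_count: int
    prior_messages_between_users: int

def should_show_safety_alert(dm: DirectMessage) -> bool:
    """Flag a first-contact DM to a teen from an unfamiliar sender.

    Purely illustrative heuristic: a real system would combine many more
    signals and trained models rather than a fixed rule like this.
    """
    if not dm.recipient_is_teen:
        return False
    return (
        not dm.sender_is_friend
        and dm.shared_server_count == 0
        and dm.prior_messages_between_users == 0
    )

# Example: an unsolicited first message to a teen would trigger an alert.
print(should_show_safety_alert(DirectMessage(True, False, 0, 0)))  # True
```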

Together we are building a safer place for everyone to hang out. To stay up to date on our latest policies, check out our Policy Hub.

Tags:
Moderation
Policy
Privacy
Transparency
