When we receive a report from a Discord user, the Trust & Safety team looks through the available evidence and gathers as much information as possible. The investigation centers on the reported messages, but it can expand if the evidence points to a larger violation: for example, if an entire server is dedicated to bad behavior, or if the behavior appears to be part of a wider pattern.

We spend a lot of time on this process because we believe the context in which something is posted matters and can change its meaning entirely: the same message might be a joke between friends or plain harassment. We may ask the reporting user for more information to help our investigation.

Responding to user reports is an important part of Trust & Safety’s work, but we know there is also violating content on Discord that might go unreported. This is where we get proactive. Our goal is to stop bad actors and their activity before anyone else encounters it. We prioritize getting rid of the worst-of-the-worst content because it has absolutely no place on Discord, and because the risk of harm is high. We focus our efforts on exploitative content, in particular non-consensual pornography and sexual content related to minors, as well as violent extremism. 

Please note: We don't proactively scan the contents of users’ private messages — privacy is incredibly important to us and we try to balance it thoughtfully with our duty to prevent harm. However, we scan 100% of images uploaded to our platform using PhotoDNA, an industry-standard tool, to detect matches to known child sexual abuse material. When we have data suggesting that a user is engaging in illegal activity or violating our policies, we investigate their networks, their activity on Discord, and their messages to proactively identify accomplices and determine whether violations have occurred.
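PhotoDNA itself is proprietary, but the general pattern this kind of scanning follows (fingerprint each upload, then compare the fingerprint against a database of hashes of known material) can be sketched roughly as follows. The hash function and sample data below are illustrative placeholders: real PhotoDNA computes a perceptual hash that survives resizing and re-encoding, whereas the cryptographic hash used here would match only byte-identical files.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Placeholder fingerprint. PhotoDNA uses a perceptual hash designed to
    # match visually similar images; SHA-256 is used here purely to
    # illustrate the lookup flow.
    return hashlib.sha256(image_bytes).hexdigest()

# Database of fingerprints of known violating material (illustrative data).
KNOWN_HASHES = {fingerprint(b"known-violating-image-bytes")}

def scan_upload(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known fingerprint and should be
    flagged for review; the image content itself is never interpreted."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

A key property of this design is that matching happens on fingerprints alone, so the system can flag known material without any human or model inspecting the contents of ordinary uploads.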