You know it. We know it. Spam is a problem on Discord. We treat this issue with the same level of seriousness as any other problem that impacts your ability to talk and hang out on Discord, fine-tuning our anti-spam systems daily to catch more and more spammers and spam content.
Spam sucks. So today, we want to tell you what we are doing to combat it.
Definitions on what “spam” is can vary widely across companies, so let’s lay out how we define spam at Discord: The automatic or centrally operationalized creation or usage of accounts en masse to present users with undesired or malicious content and experiences.
When we categorize spam, we use three distinct groups based on the types of spam attacks we work to catch, each with many different subgroups.
Below is a high-level example of what we look for:
Generated accounts make up the bulk of our spam. These are accounts that are created and operated through automation software in order to evade our systems.
Compromised accounts cause some of the highest-impact spam, as legitimate users lose access to their own accounts. These accounts are then used to send more spam to unsuspecting users, potentially spreading the compromise even further.
Human-operated accounts comprise some of our longest-lasting spam actors on the platform. These accounts are real accounts created and operated by real humans.
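To make the taxonomy above concrete, here is a minimal sketch of how those three groups could be modeled in code. The class names, fields, and routing logic are purely illustrative assumptions on our part, not Discord's actual detection systems, which rely on many more signals:

```python
from dataclasses import dataclass
from enum import Enum, auto

class SpamCategory(Enum):
    """Hypothetical labels mirroring the three groups described above."""
    GENERATED = auto()       # created and operated through automation software
    COMPROMISED = auto()     # real accounts taken over and reused for spam
    HUMAN_OPERATED = auto()  # real accounts run by real humans

@dataclass
class Account:
    # Illustrative signals only; real systems would derive these
    # from behavioral and account-history data.
    created_by_automation: bool = False
    recently_taken_over: bool = False

def categorize(account: Account) -> SpamCategory:
    """Route an account into one of the three spam categories."""
    if account.created_by_automation:
        return SpamCategory.GENERATED
    if account.recently_taken_over:
        return SpamCategory.COMPROMISED
    return SpamCategory.HUMAN_OPERATED
```

In practice each category would branch into the many subgroups mentioned above, each with its own detection logic.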
We have a full and growing team working on spam, but it’s a never-ending game of cat and mouse that also requires us to make sure legitimate users don’t get caught in the crossfire. While we recognize we may never be able to prevent 100% of these accounts from joining your servers, we implement new interventions constantly to make spamming more expensive for bad actors, and we act on an ever-growing percentage of spammers automatically. The more expensive it is for bad actors to engage in spam-producing activity, the less likely they are to commit to it.
We want to provide more transparency about our growing efforts at the platform-level to help users avoid and fight spam.
We recently implemented a more prominent Report Spam button in DMs so users can provide us with relevant signal about spam behavior. Thanks to community reporting, our ability to identify bad actors has increased by 1000%, allowing us to more rapidly discover and remove spammers while also improving our automated detection models.
Many spammers are caught after they send only a few messages and, in the most extreme cases, we catch spammers before they are able to send a single message. A recently-launched feature allows for the removal of spam DMs. If we detect or become aware of a spammer, we will warn users when they receive a DM from the spammer and hide the messages.
We are currently testing a system that monitors servers for inauthentic behavior from new members, and proactively puts the server into safe mode, requiring captchas to engage with the community for a period of time.
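One simple way such a system could work is a sliding-window threshold: if too many suspicious joins arrive in a short window, the server flips into safe mode. The sketch below is our own illustration under that assumption; the class, thresholds, and signals are hypothetical, not Discord's actual parameters:

```python
from collections import deque

class SafeModeMonitor:
    """Hypothetical sketch: put a server into safe mode when too many
    suspicious new-member joins occur within a sliding time window."""

    def __init__(self, max_suspicious_joins: int = 10, window_seconds: float = 60.0):
        self.max_suspicious_joins = max_suspicious_joins
        self.window_seconds = window_seconds
        self.joins = deque()  # timestamps of suspicious joins
        self.safe_mode = False

    def record_suspicious_join(self, now: float) -> None:
        self.joins.append(now)
        # Drop events that have aged out of the window.
        while self.joins and now - self.joins[0] > self.window_seconds:
            self.joins.popleft()
        if len(self.joins) > self.max_suspicious_joins:
            self.safe_mode = True  # new members now face a captcha

    def requires_captcha(self, member_is_new: bool) -> bool:
        return self.safe_mode and member_is_new
```

Under this design, established members are unaffected; only new members are asked to complete a captcha while safe mode is active.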
We also have plans to integrate this functionality into Membership Screening so you don't have to install multiple bots for captchas.
Even if spammers evade our initial layers of detection, we have implemented a suspicious link system that warns users before they visit a flagged link, similar to the warnings Chrome shows when visiting certain sites.
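At its simplest, an interstitial like this checks a link's hostname against a reputation list before letting the user through. The sketch below is a minimal illustration under that assumption; the domain list and function are hypothetical, and a real system would consult a constantly updated reputation service rather than a static set:

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical blocklist for illustration only.
SUSPICIOUS_DOMAINS = {"free-nitro.example", "steamgiveaway.example"}

def link_warning(url: str) -> Optional[str]:
    """Return an interstitial warning for flagged links, else None."""
    host = urlparse(url).hostname or ""
    if host in SUSPICIOUS_DOMAINS:
        return f"Warning: {host} has been flagged as suspicious."
    return None
```

A flagged link would show the warning before the user proceeds; unflagged links pass through untouched.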
Our explicit goal for this work is to minimize your exposure to spammers and spam content and, as an added benefit, reduce vectors for account takeover (ATO). This starts with the anti-spam measures we’ve discussed above, but there’s more to come on our spam and ATO efforts in the coming months. We’re also working on additional features for community moderators that we’ll be able to share in more detail soon.
We’re investing heavily in fighting spam from a resourcing, detection, and feature perspective to ensure you have a better experience on Discord. You can help us out by reporting any spam you come across, and by sharing your ideas at our feedback page so we know how to make your Discord experience safer and easier.