Mental wellness is important for everyone. Discord is partnering with Crisis Text Line, a nonprofit that provides 24/7 text-based mental health support and crisis intervention via trained volunteer Crisis Counselors.
If you report a message for self-harm within the Discord mobile app, you will be presented with information on how to connect with a Crisis Text Line volunteer Crisis Counselor. You can also text ‘DISCORD’ to 741741 to reach Crisis Text Line counselors available 24/7 to help you or a friend through any mental health crisis. Our goal is to enable users to get the help they need for themselves, or empower others to get help, as quickly as possible.
Crisis Text Line is available to those in the United States and is offered in both English and Spanish.
We’re constantly improving and expanding our approach to teen safety and wellbeing on Discord. That’s why we’re proud to support the Digital Wellness Lab at Boston Children’s Hospital to help ground our approach to teen safety and belonging in the latest scientific research and best practices. The Digital Wellness Lab at Boston Children’s Hospital convenes leaders in technology, scientific research, healthcare, child development, media, entertainment, and education to deepen understanding and address the future of how young people can engage healthily with media and technology. We’re excited to work with the Digital Wellness Lab to help develop ways to better support teen mental health, both online and off.
Back-to-school season is a great time for families to rethink and reset their relationship with technology. To help out, Discord is rolling out a new program with National PTA to help families and educators discuss ways to foster positive relationships and build belonging in our digital world. Through the program, free resources will be provided, along with breakout activities and digital safety tips for families. As part of National PTA’s PTA Connected program, Discord will also fund 30 grants for local high school PTAs to pilot the program during the 2022–2023 school year.
Parents, want some help starting a conversation about digital safety with your family or at your school? Check out National PTA’s PTA Connected page for everything you need.
Beyond our work with Crisis Text Line, the Digital Wellness Lab, and National PTA, we continue to build out efforts and resources that help people on Discord find belonging and support one another to overcome the impossible. Finding a community that is navigating similar challenges can be incredibly helpful for healing.
That said, platforms have a critical role to play in ensuring these digital spaces aren’t used to normalize or promote hurtful behaviors or to encourage other users to engage in acts of self-harm. We recently expanded our Self Harm Encouragement and Promotion Policy to ensure this type of content is addressed on Discord. We do not allow any content or behavior that promotes or encourages individuals to commit acts of self-harm on Discord. We also prohibit content that seeks to normalize self-harming behaviors, as well as content that discourages individuals from seeking help for self-harm behaviors.
We’re committed to mental wellbeing and helping our users uplift each other and their communities. Our newly implemented partnerships and resources, developed with the support of Crisis Text Line, the Digital Wellness Lab, and National PTA, are an important addition to our ongoing work to give everyone the power to create space to find belonging in their lives.
Our guidelines state that users may not organize, promote, or engage in the buying, selling, or trading of dangerous and regulated goods. This has always been our policy and will not be changing.
We want to use this opportunity to talk a little bit more about what we consider to be a “dangerous and regulated good” and how this policy will apply:
To help put this policy into context, here are some examples of situations we would and would not take action against under this policy:
We may take any number of enforcement actions in response to users and servers that are attempting to buy, sell, or trade dangerous and regulated goods on Discord — including warning an account, server, or entire moderator team; removing harmful content; and permanently suspending an account or server.
After receiving feedback from our community, we are evaluating the potential for unintended consequences and negative impacts of the age-restricted requirement. Our existing guidelines on Dangerous and Regulated Goods remain as stated; however, we are assessing how best to support server moderators regarding the age-restricted requirement and to ensure we are not preventing positive online engagement.
We regularly evaluate and assess our policies to ensure Discord remains a safe and welcoming place, and plan to update our Community Guidelines in the coming months. We will provide any important updates on this policy then.
The full list of content and behaviors not allowed on Discord can be found in our Community Guidelines. Our Safety Center and Policy & Safety Blog are also great resources if you would like to read more about our approach to safety and our policies.
You can read an overview of the following policies here.
Teen self-endangerment is a nuanced issue that we do not take lightly. We want our teen users to be able to express themselves freely on Discord while also taking steps to ensure these users don’t engage in risky behaviors that might endanger their safety and wellbeing.
In order to help our teenage users stay safe, our policies state that users under the age of 18 are not allowed to send or access any sexually explicit content. Even when this kind of content is shared consensually between teens, there is a risk that self-generated sexual media can be saved and shared elsewhere. We want to help our users avoid finding themselves in these situations.
In this context, we also believe that dating online can result in self-endangerment. Under this policy, teen dating servers are prohibited on the platform and we will take action against users who are engaging in this behavior. Additionally, older teens engaging in the grooming of a younger teen will be reviewed and actioned under our Inappropriate Sexual Conduct with Children and Grooming Policy.
Through our work and partnership with a prominent child safety organization, we determined that we will, when possible, warn teens who have engaged in sexually explicit behavior before moving to a full ban. One example is teens sharing explicit content with one another that is not their own.
Discord has a zero-tolerance policy for child sexual abuse, which does not have a place on our platform or anywhere in society.
We expanded our Child Sexual Abuse Material (CSAM) and Child Sexualization Policy to clarify the criteria we use to identify child sexualization material: the policy now encompasses any text or media content that sexualizes children, including drawn, photorealistic, and AI-generated photorealistic child sexual abuse material. The goal of this update is to ensure that the sexualization of children in any context is not normalized by bad actors.
Discord has a zero-tolerance policy for inappropriate sexual conduct with children and grooming. Grooming is inappropriate sexual contact between adults and teens on the platform, with special attention given to predatory behaviors such as online enticement, and the sexual extortion of children, commonly referred to as “sextortion.” When we become aware of these types of incidents, we take action as appropriate, including by banning the accounts of offending adult users and reporting them to the National Center for Missing & Exploited Children (NCMEC), who subsequently work with local law enforcement.
Our Safety team works hard to find and remove abhorrent, harmful content and to take action, including banning the users responsible and engaging with the proper authorities.
Discord uses a mix of proactive and reactive tools to remove content that violates our policies, from the use of advanced technology like machine learning models and PhotoDNA image hashing, to partnering with community moderators to uphold our policies and providing in-platform reporting mechanisms to surface violations.
We proactively scan images uploaded to our platform using PhotoDNA to detect child sexual abuse material (CSAM), and report any CSAM content and perpetrators to NCMEC, who subsequently work with local law enforcement to take appropriate action.
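As a rough illustration of the matching step behind hash-based scanning (not Discord’s actual implementation; the database, values, and function names below are hypothetical, and a plain cryptographic hash stands in for a perceptual hash like PhotoDNA):

```python
import hashlib

# Hypothetical database of fingerprints for known-violating images, e.g. hashes
# shared through industry databases (all names and values here are illustrative).
KNOWN_VIOLATING_HASHES: set[str] = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}


def image_fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded image.

    Systems like PhotoDNA use perceptual hashes that tolerate resizing and
    re-encoding; a plain SHA-256 digest is used here only to keep the sketch
    self-contained and runnable.
    """
    return hashlib.sha256(image_bytes).hexdigest()


def should_block_and_report(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known-violating fingerprint and
    should therefore be blocked and escalated for reporting (e.g. to NCMEC)."""
    return image_fingerprint(image_bytes) in KNOWN_VIOLATING_HASHES
```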
Investing in technological advancements and tools to proactively detect CSAM and grooming is a key priority for us, and we have a dedicated team to handle related content. In Q4 2022, we proactively removed 99% of servers found to be hosting CSAM. You can find more information in Discord’s latest Transparency Report.
In addition, we lead our own Trusted Reporter Network for intelligence sharing and direct communication with expert third parties, including researchers, industry peers, and journalists.
We believe that in the long term, machine learning will be an essential component of safety solutions. In 2021, we acquired Sentropy, a leader in AI-powered moderation systems, to advance our work in this domain. We will continue to balance technology with the judgment and contextual assessment of highly trained employees, as well as continuing to maintain our strong stance on user privacy.
Here is an overview of some of our key investments in technology:
Safety Rules Engine: The rules engine allows our teams to evaluate user activities such as registrations, server joins, and other metadata. We can then analyze patterns of problematic behavior to make informed decisions and take uniform actions like user challenges or bans.
AutoMod: AutoMod allows community moderators to block messages with certain keywords, automatically block dangerous links, and identify harmful messages using machine learning. This technology empowers community moderators to keep their communities safe.
Visual Safety Platform: This is a service that can identify hashes of objectionable images such as child sexual abuse material (CSAM), and check image uploads to Discord against databases of known objectionable images.
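To make the filtering layer concrete, here is a minimal sketch of the kind of keyword and link matching that a tool like AutoMod lets moderators configure; the rule lists and function below are hypothetical and are not Discord’s API.

```python
import re

# Hypothetical moderator-configured rules; real deployments layer allow-lists,
# per-channel overrides, and machine-learning classifiers on top of this.
BLOCKED_KEYWORDS = {"badword1", "badword2"}
BLOCKED_LINK_PATTERN = re.compile(
    r"https?://(?:[\w-]+\.)*scam-example\.com", re.IGNORECASE
)


def should_flag(message: str) -> bool:
    """Return True if a message contains a blocked keyword or a dangerous link."""
    lowered = message.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return True
    return bool(BLOCKED_LINK_PATTERN.search(message))


# Example usage:
# should_flag("check this out: https://login.scam-example.com")  -> True
```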
There is constant innovation taking place within and beyond Discord to improve how companies can effectively scale and deliver content moderation. In the future, our approach will continue to evolve, as we are constantly finding new ways to do better for our users.
We know that collaboration is important, and we are continuously working with experts and partners so that we have a holistic and informed approach to combating the sexual exploitation of children. We’re grateful to collaborate with the Tech Coalition and NoFiltr to help young people stay safer on Discord. Later this year, Discord will introduce a model into our service to aid in detecting grooming. We believe this is a critical step in keeping teens safe on our platform. This model was developed in collaboration with Thorn with support from the Tech Coalition. We are also proud to announce a new partnership with INHOPE, the global network combating online CSAM. Through this partnership, we look forward to working closely with global hotlines to ensure we are working together to detect, remove, and keep CSAM off our platform.
Want to learn more about Discord’s safety work? Check out these resources below:
We look forward to continuing this important work and deepening our partnerships to ensure we continue to have a holistic and nuanced approach to child safety.
Hate or harm targeted at individuals or communities is not tolerated on Discord in any way, and combating this behavior and content is a top priority for us. We evolved our Threats Policy to address threats of harm to others. Under this policy, we will take action against direct, indirect, and veiled threats, conditional statements of intent to cause harm, and those who encourage any of this behavior.
We also refined our Hate Speech Policy with input from a group of experts who study identity and intersectionality and came from a variety of different identities and backgrounds themselves. Under this policy, we define “hate speech” as any form of expression that denigrates, vilifies, or dehumanizes; promotes intense, irrational feelings of enmity or hatred; or incites harm against people based on protected characteristics.
In addition, we’ve expanded our list of protected characteristics to go beyond what most hate speech laws cover to include the following: age; caste; color; disability; ethnicity; family responsibilities; gender; gender identity; housing status; national origin; race; refugee or immigration status; religious affiliation; serious illness; sex; sexual orientation; socioeconomic class and status; source of income; status as a victim of domestic violence, sexual violence, or stalking; and weight and size.
Last year, we wrote about how we address Violent Extremism on Discord. We are deeply committed to our work in this space and have updated our Violent Extremism Policy to prohibit any kind of support, promotion, or organization of violent extremist groups or ideologies. We worked closely with a violent extremism subject-matter expert to update this policy and will continue to work with third-party organizations like the Global Internet Forum to Counter Terrorism and the European Union Internet Forum to ensure it is enforced properly.
During 2022, we invested significantly in how we combat criminal activity on Discord. In February, we released two blog posts to help users identify and protect themselves against scams on Discord. Alongside these, we developed a Financial Scams Policy which prohibits three types of common scams: identity scams, investment scams, and financial scams. We will take action against users who scam others or use Discord to coordinate scamming operations. If a user has been the victim of fraudulent activity, we recommend they report the activity to Discord and contact law enforcement, who can follow up with us for more information if it helps their investigation.
Under our new Fraud Services Policy, we've expanded our definitions of what constitutes fraudulent behavior. We will take action against users who engage in reselling stolen or ill-gotten digital goods or are involved in coordinated efforts to defraud businesses or engage in price gouging, forgery, or money laundering.
We also do not allow any kind of activity that could damage or compromise the security of an account, computer network, or system under our updated Malicious Conduct Policy. We will take action against users who use Discord to host or distribute malware, or carry out phishing attempts and denial-of-service attacks against others.
User safety is our top priority and we’re committed to ensuring Discord continues to be a safe and welcoming place. We regularly evaluate and assess our policies in collaboration with experts, industry-leading groups, and partner organizations. We look forward to continuing our work in this important area and plan to share further updates down the road.
We invested a substantial amount of time and resources into combating spam in 2022, disabling 141,087,602 accounts for spam or spam-related offenses. Our proactive removal rate was 99%, meaning these accounts were disabled before we received a user report.
We observed a significant increase in the number of accounts disabled during the third quarter of 2022, followed by a fourth quarter that remained elevated but lower, and then a substantial decrease in the first quarter of 2023. These trends can be attributed to increased and ongoing investments in our anti-spam team, as well as a shift in our approach: by placing greater confidence in our proactive spam mitigations and avoiding false positives, we aim to reduce the number of accounts disabled in error. As a result, only 0.004% of disabled accounts submitted an appeal during the past quarter.
Spammers are constantly evolving and adapting to new technologies and methods of detection, moving to use new parts of the platform to target users. Combating spam requires a multi-faceted approach that employs new tools and technologies, while also remaining vigilant and adaptable to changing trends. We're constantly updating our approach to remove spam, and as our proactive efforts have increased, we've observed a steady decline in the number of reports we receive for spam.
The above trends were made possible over the past year due to a number of technological advances in both back-end and front-end features, products, and tools to help prevent and remove spam from Discord.
Discord is committed to combating spam and building a safer environment for people to find belonging. We've grown our teams, shipped new features, products, and tools, and continue to listen closely to users about how spam impacts their experience on Discord.
We're constantly updating our approach to remove spam and remain vigilant and adaptable to changing trends. We will continue to prioritize safety and work hard to combat spam in order to help make Discord the best place for people and communities to hang out online.
Safer Internet Day was created by advocacy organizations as a way to raise awareness around online safety. It has since expanded into an opportunity for industry, government, and civil society to come together to share online safety and digital well-being best practices.
Making the internet a better, safer place for everyone is a tall task — and we know we can’t do it alone. Last year, as part of our ongoing partnership with the National Parent Teacher Association, we co-hosted an event featuring some of our teen moderators talking about belonging and mental health in online spaces. We’ve continued this partnership with the PTA Connected: Build up and Belong Program. This program helps families explore the use of technology as a communication and relationship tool, discusses ways to build belonging and positive communities in our digital world, teaches how to navigate privacy and safety on digital platforms, and helps families have interactive conversations about online scenarios, experiences and expectations.
We’re committed to helping build a safer internet on Safer Internet Day and beyond. To continue that mission, Discord has partnered with several international leaders in the online safety space.
This year, we’re teaming up with NoFiltr to help educate Discord users on the importance of safer internet practices to empower people to use Discord and other platforms in a smart and healthy manner. One of the ways we’ll be working together is by engaging with NoFiltr’s Youth Innovation Council to co-develop educational resources that connect to users where they are.
For this year’s Safer Internet Day, Discord and NoFiltr, with help from the Youth Innovation Council, are launching the “What’s Your Online Digital Role?” quiz. We believe that everyone can play a part in helping make online communities a safe and inclusive space, and this interactive quiz can help you figure out what role best suits you when it comes to being a part of and building a safe community. Take the quiz here to find out.
Discord is partnering with Childnet UK and Internet Sans Crainte, two European Safer Internet Centers dedicated to increasing awareness and education about better online safety practices for youth.
In addition, we’ll be hosting a roundtable event in Brussels where policymakers, civil society thought leaders, and industry partners will come together to share insights, discuss challenges, and discuss steps we can take together to make Discord and the internet a safer place for young people.
We also wanted to provide materials and tools that everyone can use to help facilitate healthy and open conversation about online safety — it’s not so easy approaching difficult or sensitive topics when it comes to talking about your experiences online.
To help kick start these important discussions in a more approachable way, we’ve made a print-at-home fortune teller filled with questions and icebreaker prompts that can help you lead a conversation about better digital health and safer online practices.
Interested in your own Safer Internet Day fortune teller? Check out our Safer Internet Day home page on Discord’s Safety Center, where you can print out and assemble your very own fortune teller.
We’ll be showcasing how to use this resource with our communities as part of our Safer Internet Day celebration.
We’ll be celebrating Safer Internet Day in the publicly-joinable Discord Town Hall server with an event all about Safety, including a walk-through of the fortune teller activity. You can join Discord’s official Town Hall server for the event happening on February 7th, 2023.
At Discord, we are deeply committed to helping communities thrive on our platform. Our teams are always working to make Discord an even more accessible and welcoming place to hang out online with your friends and build your community. Here are a couple of safety highlights from the last year:
In June, we debuted our new automated moderation tool, AutoMod, for community owners and admins looking for an easier way to keep communities and conversations welcoming and to empower them to better handle bad actors who distribute malicious language, harmful scams or links, and spam.
Today, AutoMod is hard at work in over 390,000 servers, reducing the workload for thousands of community moderators everywhere. So far, we’ve removed over 20 million unwanted messages — that’s 20 million fewer manual actions pulling moderators away from actually enjoying and participating in the servers they tend to. Check out our AutoMod blog post if you’re interested in getting AutoMod up and running in your community.
Spam is a problem on Discord, and we treat it with the same level of seriousness as any other problem that impacts your ability to talk and hang out on Discord. That’s why this year, we’ve put more time and resources into refining our policies and operational procedures to more efficiently and accurately target bad actors. We had over 140 million spam takedowns in 2022, and we’ll continue to remove spam from our platform in the future. Read more about Discord’s efforts to combat spam on our platform here.
Wanna learn more about online safety? How you can keep yourself and others safer online? We’ve gathered these resources to help give you a head start:
Our Trust and Safety Team currently reviews more than 800 reports every day for violations of our Terms of Service and Community Guidelines, in total handling more than 6,000 reports a week. Those reports vary greatly: sometimes the team may be investigating server raids and NSFW avatars; other times it’s removing deeply disturbing content like gore videos or revenge pornography. We also get reports where a person demands we ban another person for “calling them a poopyhead,” while other times someone is being doxxed or in danger of self-harm and a friend of theirs reaches out to us.
Further complicating things, we also get reports from people who use a combination of false information, edited screenshots, socially engineered situations, or mass reports in an attempt to get a person banned or server deleted. We don’t act without clear evidence, and we gather as much information as possible to make informed, evidence-based decisions that take into account the context behind anything that’s posted. We’ll talk about some of the hard decisions we face later in this blog.
In many situations, what happened is pretty obvious — a person has raided a server to post shock or gore content, they’re posting someone’s private information, or they’re directly threatening to harm someone in real life.
There are other cases where the situation is not so simple. Sometimes, parts of a conversation have been deleted, a slur is used as an act of reclamation, or someone is distributing a hack for a game on Discord that is generally used for cosmetic purposes — but could be used to cheat under certain conditions.
The Trust and Safety team seeks out context to best evaluate what’s going on even when things seem ambiguous. To illustrate the complexity of Trust and Safety’s decision making, see the three scenarios below and the accompanying considerations.
Two people report each other for bad behavior. One of them clearly started the harassment, while the other escalated it. It starts out with simple insults, but they’re not willing to block each other.
Eventually, it escalates to where one threatens to shoot the other’s dog and the other responds by making a sexual threat towards the initial person’s boyfriend.
Meanwhile, they’re doing this in a channel that has plenty of other people in it, some of whom are clearly uncomfortable with the escalation, and one of those bystanders in the server writes in too, asking us to do something about it.
Finally, not only is the owner of the server not doing anything, they’re actually egging the two people on, further escalating the situation.
Who should we take action on? Is it the person that started it? The person that escalated to a threat first? Or is it both people, even though each believes the other is at fault? Both people could’ve solved it by blocking the other — should we take any action at all?
How much do we believe each person felt threatened by the other person and thought the only reasonable thing to do was to keep engaging?
Should we also take action on the server owner in some way for egging them on instead of defusing the situation? If we do take action on the server owner, what should it be? A warning? What if one of the server members reports that the owner was privately messaging the two people in order to keep the feud going? Should we punish the server owner instead of the two people?
Someone is banned for messaging people across servers a combination of racial slurs and spam. This person contacts us to appeal their ban.
First, we inform them of the specific Community Guideline they have violated. Then, the banned person asks which specific message led to the ban. They insist they’ve done nothing wrong and never violated the Community Guidelines.
They claim they’ve been an upstanding citizen, are in twenty different Discord servers, and have a host of users that can speak on their behalf. They insist it’s a false ban based on false reporting.
Finally, they enlist some of those people to write in and tell us that the person was maliciously reported. They demand we overturn the ban immediately.
Is the banned person acting in good faith? Do they legitimately not understand how they violated our Community Guidelines?
Are they simply trying to identify the reporter? Should we provide vague information? Will the banned person continue arguing that whatever messages we have are insufficient?
How do we respond to the supporters that are writing in about this ban? How much information should they get about the situation?
Someone worriedly reaches out to us about a DM they received from another person claiming to be Discord staff. The DM is a warning that their messages are being monitored and that if they continue, the authorities will be contacted. They ask us if the message is real.
While the DM isn’t from Discord, the person pretending to be Discord staff contacts us and admits to sending those messages from an alt. The impersonator claims they lied in order to dissuade the initial person from self-harming.
When we investigate, it does appear to be true. The initial person was talking about some harmful activity and after receiving the impersonated warning, they’ve completely stopped.
Impersonating Discord staff is a violation of our Community Guidelines. Most of the time, impersonators engage in extremely harmful behavior and will receive an immediate ban.
In this case, it appears the impersonator has good intentions. Should we take action on the impersonator? Do we just warn them not to do it again? Do we just let it go?
On the other side, should we confirm with the initial person that the message was not from Discord? If we do that, does this encourage them to continue to self-harm?
All of this is a lot to consider, and Discord’s Trust and Safety Team is tasked with answering questions like these hundreds of times a day, seven days a week. Each situation is different, and each one involves real people who will be impacted by what we choose to do.
When creating new policy, we evaluate all available information on that topic to understand what the best policy is. We look at academic research on the topic, what other companies do, and what users, non-users, and experts in the field think. We consider whether something is illegal, whether something is harmful, and how scalable our operations are. We leverage all of these checks and balances to remove as much personal bias or interest as possible. We believe we have a deep-seated responsibility to be objective about what we allow or restrict on Discord.
Our decision-making process prioritizes safety from harm. We strive to create a platform that is safe and inclusive, regardless of someone’s race, ethnicity, gender, gender identity, religion, or sexual orientation. After all, everyone can play and enjoy games, and Discord should be that place where anyone can find someone else to play with.
Along with thinking about how we can prevent harm to people on Discord, we consider scale, privacy, nuance, and freedom when developing policy.
How can we scale the enforcement of this policy to our enormous user base? A policy that sounds good but isn’t enforced isn’t actually good policy. It’s important not just to talk the talk but to also walk the walk.
How can we balance our ability to investigate potential bad things on Discord while our users have and should have a right to privacy?
In the real world, Big Brother isn’t watching you inside your home during a conversation with two of your friends, even if you’re up to no good. Should we keep using that model on Discord if it means that people have the ability to chat about bad things, and that Discord may be used for bad actions? Are people okay with automated software or other humans reading their private conversations to stop potential bad actors?
If a potential harm is very hard to discern, is it Discord’s place as a platform to moderate a particular form of speech?
If something is reported frequently and we can’t conclude whether it’s definitely bad (even though users can take some action to protect themselves), should we rely on them to do so?
If someone opens up a server and promises rewards to its members, but people complain that they’re not fulfilling those assurances, should we forbid offering rewards?
With all this said, how can we make sure that good people don’t feel like we’re censoring them? Just because we don’t understand a hobby or interest, does that give us the permission to ban it from our platform?
After considering scale, privacy, nuance, and freedom, we outline all possible outcomes to the best of our ability, and try to find the solution with the best possible answers to those questions.
Eventually, after thorough discussion, research, and talking to third parties, we make our way to a policy document. To get more perspective, we circulate that document to other Discord staff members who aren’t on Trust and Safety and ask for their feedback.
Finally, we arrive at a conclusion, implement the policy, and monitor it. If we receive new information or the policy isn’t having the impact we’d hoped, we adjust it to make sure that it’s effective. We’re constantly listening, observing, and wanting to do better, and our policy reflects this as a living, breathing work in progress.
On that note, we wanted to talk about a recent change we’ve made to uphold our commitment to listening to community feedback.
Over the past couple of weeks, posts have appeared inquiring about Discord’s stance on a niche area of NSFW policy: cub porn. A screenshot of an email we sent about a year ago, in February 2018, has garnered significant commentary and criticism of our policy.
As our Community Guidelines state, the following immediately results in account deletion:
Furthermore, the following will lead to content removal and a warning (or ban depending on the severity):
One major reason this policy is in place on Discord is that there is a federal law in the United States against sexualized images of minors, which includes cartoons and drawings. You can see this distinction in action on Google Images, which does not show results for lolicon but does show results for cub porn.
Discord’s current policy is that anything human or humanoid is forbidden (including anthropomorphized characters). This includes most cub pornography.
While this is already more restrictive than what the law requires, we’ve received feedback that we’re not comprehensive enough here. As of today, we’re changing our policy to ban all cub porn. This means the ban on the sexualization of minors now extends to non-humanoid animals and mythological creatures as long as they appear to be underage. We’re adding “cub” to the list of categories, after lolicon and shotacon, in our Community Guidelines to clarify that this content is not allowed on Discord.
It’s really important to us that the millions of people who use Discord every day can trust our decisions. We want this blog to provide transparency on our processes, and we’re going to build on it by providing a more in-depth view into our actions as a team with Discord’s first transparency report.
In this report, we want to provide more information about our content moderation outcomes, such as how many actions are taken a month, how many users are actioned, and what is causing their removal from the platform.
We think that transparency is good. It will shine more light on the work we do, help maintain accountability, and guide conversations about where we should be spending our time to get better. We’re looking to release our first report by the end of April, and want to continue releasing them quarterly after that.
Over the last couple of weeks, we’ve received a lot of feedback about our policies and decisions. We’ve also received death threats and personal attacks directed at those who put their heart and soul into keeping you all safe on Discord.
As time goes on and Discord grows even larger, there will likely be more situations where reasonable people may disagree on the best policy to have. When this happens, we hope to engage in constructive dialogue, not personal attacks or threats.
Lastly, we’ve always taken feedback on all of our decisions, from what features to build all the way to the policies that govern what is acceptable conduct on Discord. We hope this blog shows you how our Trust and Safety team keeps Discord a safe place to bring people together around games.
We look forward to continuing this dialogue with you.
We’re committed to making Discord a safe place for teens to hang out with their friends online. While they’re doing their thing and we’re doing our part to keep them safe, sometimes it’s hard for parents to know what’s actually going on in their teens’ online lives.
Teens navigate the online world with a level of expertise that is often underestimated by the adults in their lives. For parents, it may be a hard lesson to fathom—that their teens know best. But why wouldn’t they? Every teen is their own leading expert in their life experiences (as we all are!). But having grown up online, this generation is particularly adept at making new friends, finding community, expressing their authentic selves, and testing boundaries—all online.
But that doesn’t mean teens don’t need adults’ help when it comes to setting healthy digital boundaries. And it doesn’t mean parents can’t be a guide for cultivating safe, age-appropriate spaces. It’s about finding the right balance between giving teens agency while creating the right moments to check in with them.
One of the best ways to do that is to encourage more regular, open and honest conversations with your teen about staying safe online. Here at Discord, we’ve developed tools to help that process, like our Family Center: an opt-in program that makes it easy for parents and guardians to be informed about their teen’s Discord activity while respecting their autonomy.
Here are a few more ways to kick off online safety discussions.
If a teen feels like they could get in trouble for something, they won’t be honest with you. So go into these conversations from a place of curiosity first, rather than judgment.
Here are a few conversation-starters:
Teens will be less likely to share if they feel like parents just don’t get it, so asking open questions like these will foster more conversation. Questions rooted in blame can also backfire: the teen may not be as forthcoming because they feel like the adult is already gearing up to punish them.
Read more helpful prompts for talking with your teen about online safety in our Discord Safety Center.
Our goal at Discord is to make it a place where teens can talk and hang out with their friends in a safe, fun way. It’s a place where teens have power and agency, where they get to feel like they own something.
Just because your teen is having fun online doesn't mean you have to give up your parental role. Parents and trusted adults in a teen’s life are here to coach and guide them, enabling them to explore themselves and find out who they are—while giving them the parameters by which to do so.
On Discord, some of those boundaries could include:
Using Discord’s Family Center feature so you can be more informed and involved in your teens’ online life without prying.
At Discord, we’ve created several tools to help parents stay informed and in touch with their teens online, including this Parent’s Guide to Discord and the Family Center.
In the spirit of meeting teens where they are, we’ve also introduced a lighthearted way to spur conversations through a set of digital safety tarot cards. Popular with Gen Z, tarot cards are a fun way for teens to self-reflect and find meaning in a world that can feel out of control.
The messages shown in the cards encourage teens to be kind, to use their intuition, and to trust their instincts. They remind teens to fire up their empathy, while also reminding them it’s OK to block those who bring them down.
And no, these cards will (unfortunately) not tell you your future! But they’re a fun way to initiate discussions about online safety and establish a neutral, welcoming space for your teen to share their concerns. They encourage teens to share real-life experiences and stories of online encounters, both positive and negative. The idea is to get young people talking, and parents listening.
Sometimes, even as adults, it's easy to get in over your head online. Through our research with parents and teens, we found that while 30% of parents said their Gen Zer’s emotional and mental health had taken a turn for the worse in the past few years, 55% of Gen Z said the same. And while some teens acknowledged that being extremely online can contribute to that, more reported that online communications platforms, including social media, play a positive role in their lives by providing meaningful community connection. Understanding healthy digital boundaries and how they can impact mental wellbeing is important, no matter if you’re a teen, parent, or any age in-between.
When it comes to addressing the unique safety needs of each individual, there are resources, such as Crisis Text Line. Trained volunteer Crisis Counselors are available to support anyone in their time of need 24/7. Just text DISCORD to 741741 to receive free and confidential mental health support in English and Spanish.
Because investigations are ongoing, we can only share limited details. What we can say is that the alleged documents were initially shared in a small, invite-only server on Discord. The original server has been deleted, but the materials have since appeared in several additional servers.
Our Terms of Service expressly prohibit using Discord for illegal or criminal purposes. This includes the sharing of documents that may be verifiably classified. Unlike the overwhelming majority of the people who find community on Discord, this group departed from the norms, broke our rules, and did so in apparent violation of the law.
Our mission is for Discord to be the place to hang out with friends online. With that mission comes a responsibility to make Discord a safe and positive place for our users. For example, our policies clearly outline that hate speech, threats and violent extremism have no place on our platform. When Discord’s Trust and Safety team learns of content that violates our rules, we act quickly to remove it. In this instance, we have banned users involved with the original distribution of the materials, deleted content deemed to be against our Terms, and issued warnings to users who continue to share the materials in question.
This recent incident fundamentally represents a misuse of our platform and a violation of our platform rules. Our Terms of Service and Community Guidelines provide the universal rules for what is acceptable activity and content on Discord. When we become aware of violations to our policies, we take action.
The core of our mission is to give everyone the power to find and create belonging in their lives. Creating a safe environment on Discord is essential to achieve this, and is one of the ways we prevent misuse of our platform. Safety is at the core of everything we do and a primary area of investment as a business:
The fight against bad actors on communications platforms is unlikely to end soon, and our approach to safety is guided by the following principles:
Underpinning all of this are two important considerations: our overall approach towards content moderation and our investments in technology solutions to keep our users safe.
We currently employ three levers to moderate user content on Discord, while remaining mindful of user privacy:
There is constant innovation taking place within and beyond Discord to improve how companies can effectively scale and deliver content moderation. In the future, our approach will continue to evolve, as we are constantly finding new ways to do better for our users.
We believe that in the long term, machine learning will be an essential component of safety solutions. In 2021, we acquired Sentropy, a leader in AI-powered moderation systems, to advance our work in this domain. We will continue to balance technology with the judgment and contextual assessment of highly trained employees, as well as continuing to maintain our strong stance on user privacy.
Here is an overview of some of our key investments in technology:
In the field of online safety, we are inspired by the spirit of cooperation across companies and civil society groups. We are proud to engage and learn from a wide range of companies and organizations including:
This cooperation extends to our work with law enforcement agencies. When appropriate, Discord complies with information requests from law enforcement agencies while respecting the privacy and rights of our users. Discord also may disclose information to authorities in emergency situations when we possess a good faith belief that there is imminent risk of serious physical injury or death. You can read more about how Discord works with law enforcement here.
If you would like to learn more about our approach to Safety, we welcome you to visit the links below.
Not sure how to approach difficult or sensitive topics when it comes to talking about your experiences online? Check out our print-at-home fortune teller filled with questions and icebreaker prompts that can help jump start a conversation about better digital health and safer online practices.
Everyone’s got a role to play in helping make your online communities a safe and inclusive space — wanna find out yours?
For this year’s Safer Internet Day, Discord and NoFiltr, with help from the Youth Innovation Council, are launching the “What’s Your Online Digital Role?” quiz. We believe that everyone can play a part in helping make online communities a safe and inclusive space, and this interactive quiz can help you figure out what role best suits you when it comes to being a part of and building a safe community. Take the quiz here to find out.
We’re committed now more than ever to helping spread the message of Safer Internet Day. In continuing our mission of making your online home a safer one, Discord is partnering with Childnet UK and Internet Sans Crainte, two European Safer Internet Centers dedicated to increasing awareness and education about better online safety practices for youth.
In addition, we’ll be hosting a round table event in Brussels where policymakers, civil society thought leaders, and industry partners will come together to share insights, discuss challenges, and discuss steps we can take together to make Discord and the internet a safer place for young people.
Wanna learn more about online safety and how you can keep yourself and others safer online? We’ve gathered these resources to help give you a head start:
We want you to feel equipped with the tools and resources to talk to your teens about online safety and to understand what safety settings Discord already offers, including resources from trusted child safety partners. Check out the updated Parent & Educator Resources in our Safety Center:
We’re also partnering with child and family safety organizations to build more resources and improve our policies for the long term. One of our close partners, ConnectSafely, recently launched their Parent’s Guide to Discord.
Larry Magid, CEO of ConnectSafely, states: "Parents are rightfully concerned about any media their kids are using, and the best way to make sure they're using it safely is to understand how it works and what kids can do to maximize their privacy, security, and safety. That's why ConnectSafely collaborated with Discord to offer a guide that walks parents through the basics and helps equip them to talk with their kids about the service and how they can use it safely."
We’re now making it easier for students to find and spend time with their classmates through Student Hubs. Each Hub features student-run communities for that school – after-school clubs, study groups, hangouts, classes, and more. Hubs are only accessible to users with an email address associated with that school. Servers within a Hub are created by students, who determine who has access to their server. Communities held within Student Hubs aren’t officially affiliated with schools themselves, nor are they intended for official school use.
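As a simplified sketch of the school-email gating described above (illustrative only; the domain mapping and function are hypothetical, not Discord’s implementation):

```python
# Hypothetical mapping from a Student Hub to the school email domains it accepts.
HUB_EMAIL_DOMAINS = {
    "example-high-school-hub": {"students.example-hs.org"},
}


def can_access_hub(hub_id: str, verified_email: str) -> bool:
    """Return True if the user's verified email domain matches the Hub's school."""
    domain = verified_email.rsplit("@", 1)[-1].lower()
    return domain in HUB_EMAIL_DOMAINS.get(hub_id, set())


# Example usage:
# can_access_hub("example-high-school-hub", "sam@students.example-hs.org")  -> True
```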
We built Student Hubs with additional safety guidelines that address cheating, bullying, and other toxic behaviors to ensure your students have a great, safe experience. For more information, you can read through our Student Hubs FAQs for parents and educators.
We have a lot planned this school year to improve on our existing safety policies, settings, and experiences for teens. We will engage more directly with parents, educators, and child safety organizations, and we’re currently hiring for a Teen Safety Policy Manager to lead this charge. We’ll have other product updates and partnership announcements to come!
Not sure how to approach difficult or sensitive topics when it comes to talking about your experiences online? Check out our print-at-home fortune teller filled with questions and icebreaker prompts that can help jump start a conversation about better digital health and safer online practices.
On October 16th, 2018, we added an arbitration and class action waiver clause. To summarize, this clause states that when there’s a disagreement between Discord and you, we’ll use an alternative dispute resolution process known as arbitration.
The current legal landscape in the United States is such that class action lawsuits can be abused. This clause was added because we are now operating a game store and subscription service for profit. Like many other companies, we are now a target for entities who wish to abuse class action lawsuits.
If you’d like to learn more about the legal landscape, arbitration, and the potential abuse of class action lawsuits, we’ve included a handful of links in the Resources section at the bottom of this post.
We think that people can disagree on whether or not arbitration is good in general or whether or not the class action system is beneficial; we don’t think it’s completely black and white. In fact, we’re very interested in how Europe is approaching the implementation of class action lawsuits, as its approach may mitigate the issues that the United States faces with them.
Because we don’t think it’s black and white, one of the things that we’ve implemented (which some of our competitors do not), is to allow you to opt out of this clause completely. We encourage you to opt out if you wish. You won’t be penalized in any way if you do so. At no point will we ever gate any features or take any action on users because they’ve opted out of arbitration.
The following changes will be live sometime before October 20th, 2018.
As we stated above in “Why did we change it,” our motivation for this change is the legal climate in the United States. To protect our users outside the United States, we’ve decided to modify this clause so that it only affects users in the United States. If you are outside of the United States, this clause does not apply to you. This means users outside the United States do not need to opt out, even if they had been planning to.
Furthermore, every user was (or will be, if they haven’t logged in yet) notified of changes to our Terms of Service via a blue notification bar at the top of the Discord client on the date the Terms were updated. In hindsight, we should have provided notice of these changes much further in advance, and we apologize for that.
That said, much of the feedback we’ve received is that our community was not aware of these changes. To provide more opportunity for those who wish to opt out and for those who may have overlooked the notification bar, we’ve extended our opt-out period from the initial 30 days to 90 days.
As stated in our Terms of Service under DISPUTE RESOLUTION:
You have the right to opt out and not be bound by the provisions requiring arbitration by sending written notice of your decision to opt out to Discord by email to arbitration-opt-out@discord.com.
Your opt-out doesn’t need to follow any specific template or form. It just needs to state your request to opt out, and it must come from the email address associated with your Discord account.
We want to reiterate — you will not be penalized in any way for opting out of arbitration. Again, we encourage you to do so if you wish to.
Thanks again for your feedback — it’s important to us as a company that we hear from our community. We really appreciate that you all care so much to voice your opinions. If you have any further questions, please send an email to privacy@discordapp.com and we’ll be happy to answer!
Protecting the privacy and safety of our users is of utmost importance to Discord.
Discord is a communications service for teens and adults who are looking to talk with their communities and friends online. We do not allow those under the age of 13 on our service, and we encourage our users to report accounts that may belong to underage individuals.
We’re pleased to share that the Better Business Bureau’s Children’s Advertising Review Unit (“CARU”) issued a report endorsing our practices relating to children’s privacy. CARU regularly monitors websites and online services for compliance with laws and best practices relating to children’s privacy and engaged with Discord as part of its routine monitoring work.
CARU finished its review of Discord in October 2020 and issued the following statement in connection with the release of its report:
“[T]he outcome we hope for is proactive corporate accountability on children’s privacy, and that is exactly what Discord delivered.”
— Dona Fraser, Senior Vice President, Privacy Initiatives, and Director of the Children’s Advertising Review Unit (CARU).
Discord appreciates CARU's thorough evaluation of our service and our practices. We look forward to continuing our work to improve online privacy and safety.