There are millions of communities on Discord with a wide variety of interests. As such, some servers may want to integrate adult content into their community discussions: content that is suitable for adults but unsuitable for those under an age restriction. Channels for this content can provide an important space for adults in your server to discuss topics such as sexual health, safe sex, and their relationships with their bodies, or a space to share and explore adult content. Adult content in any medium cannot be shared on Discord outside of channels that have been marked with the NSFW toggle.

Maintaining such a space in any community requires a significant amount of oversight, effort, and proactive work from the moderation team. Keep in mind that creating this space is entirely optional; communities are free to decide whether such a space suits their culture and fits within the context of their community. This article will cover what falls under the umbrella of adult content, when and where it is allowed, how to maintain a space for it, and how to successfully set up and moderate such a space in compliance with Discord’s Terms of Service and Community Guidelines.
Adult content is anything that would be unsuitable for those under the age of 18 to view. This is synonymous with the term “NSFW” for the purposes of this article. NSFW is an acronym for the statement “not safe for work”, which is used as a shorthand to clearly indicate to others that a certain type of content may not be appropriate to look at in public, professional or controlled environments.
To learn more about how Discord tackles safety on its platform, check out our Safety Portal, particularly the Parents and Educators section, for further guidance.
The first step to setting up an adult content channel is to determine what method of age gating you need and how you want to set it up. People who should not have access to the content will try to get in, and the steps you take to keep them out are up to you. Please note that since things like server icons, invite splashes, server banners, user profile pictures, usernames, nicknames, and custom statuses cannot be age gated, they should not contain any adult content. Emojis containing adult content should only be hosted and posted in places that are age gated.
Discord only requires that you use the NSFW toggle, but depending on your server and the nature of the content shared, you may want to take a more active approach to ensuring the content is only accessed by adults.
*Unless you are using the channel description for verification instructions rather than an automatic greeter message.
If you want to use the remove unverified role method, you will need a bot that can automatically assign a role to a user when they join.
Verification Actions
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.
In order to use the command in channel method, you will need to instruct your users to remove the Unverified role or to add the Verified role to themselves.
The NSFW toggle must be turned on for any channels with NSFW content. Even if your server is exclusively 18+ and requires users to send a picture of their photo ID to join, the channel still needs this toggled on. In addition to keeping minors out of spaces with adult content, this toggle will also flag the channel as NSFW so that adult users can avoid it if they do not wish to see that content. Not marking NSFW channels appropriately opens the risk of action being taken on the server from Discord’s Trust and Safety Team.
Discord asks all users to submit their birthday upon account creation, and has been asking users whose accounts were made prior to this rollout to provide their birthday upon attempting to open an NSFW channel. This will prevent users who have told Discord that they are under the age of 18 from seeing any content in the channel; instead, they will be met with a page telling them that they are not old enough to view it.
Markdown is also supported in an embed. Here is an image to showcase an example of these properties:
Example image to showcase the elements of an embed
An important thing to note is that embeds also have their limitations, which are set by the API. Here are some of the most important ones you need to know:
If you feel like experimenting even further you should take a look at the full list of limitations provided by Discord here.
It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool which provides visualization for bot and webhook embeds.
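To make the JSON requirement concrete, here is a minimal sketch of an embed payload built in Python. The property names (title, description, color, fields, footer) are standard embed properties from Discord's API; the text content and the color value are placeholder examples.

```python
import json

# A minimal embed payload in the JSON shape Discord's API expects.
# The text content here is placeholder; the keys are real embed properties.
embed = {
    "title": "Server Rules",
    "description": "Please read **all** rules before participating.",  # markdown works here
    "color": 0x5865F2,  # an integer (decimal or hex), not a CSS color string
    "fields": [
        {"name": "Rule 1", "value": "Be respectful.", "inline": True},
        {"name": "Rule 2", "value": "No spam.", "inline": True},
    ],
    "footer": {"text": "Last updated by the mod team"},
}

# Messages carry embeds as a list under the "embeds" key.
payload = json.dumps({"embeds": [embed]})
```

Pasting a payload like this into an embed visualizer lets you preview how the fields will render before sending it through a bot or webhook.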
If you run a server with NSFW content, you may want to consider preventing users from just joining and immediately opening NSFW channels. Whether you want to do this or not depends on you and what your channels are for.
If your main concern is to do your due diligence to abide by the Terms of Service and Community Guidelines for a server with some image only channels, you may not want to gate your server entirely. If your main concern is keeping your younger members safe from engaging in inappropriate discussions and your adult members safe from unknowingly interacting with minors in an inappropriate way, then you might want to set up additional levels of security to keep out any minors who may have given the incorrect age when prompted by Discord’s age gate.
This can be achieved by asking users how old they are as part of your onboarding process. Perhaps a user must supply their age in their introduction or through picking a role from a bot. Users who aren’t aware that there is NSFW content in your server that they may later want access to are less likely to pretend to be over 18 immediately upon joining.
It should be noted that this is less effective if your server is named something that makes it obvious that the server contains 18+ content because users who join will likely know that they will need to lie to access it.
If your server has certain channels designated for adult content and the server has minors in it that are there to access age appropriate content that’s also hosted in the server, it may be worth role locking the adult content channels.
This can be implemented by making the channel invisible to @everyone, to a role all members have, or to an under-18 role. The main ways to do this are to leave the channel’s default permissions neutral and set a specific role to deny access, or to set the default permissions to deny access and an over-18 role to grant access. This also allows you to later remove access to adult content from any users causing problems in the channel by simply removing the role that grants them access.
There are various ways that role permissions can be set up to prevent access to a channel. This is only one example; it is not the only option, and it may not be the best fit for your overall server permissions setup.
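The two setups described above can be sketched with a simplified model of how channel overwrites resolve. This is an illustration only (real Discord permission resolution has more layers); the key idea it shows is that an explicit role allow overrides an @everyone deny.

```python
# Simplified model of channel-overwrite resolution for the two setups above.
# Values are 'allow', 'deny', or 'neutral'. Real Discord resolution has
# additional layers (member overwrites, base permissions); this is a sketch.

def can_view(everyone_overwrite, role_overwrites):
    # An explicit allow on any of the member's roles overrides an @everyone deny.
    if "allow" in role_overwrites:
        return True
    if "deny" in role_overwrites:
        return False
    return everyone_overwrite != "deny"

# Setup A: @everyone neutral, an "Under 18" role denied.
under_18_blocked = can_view("neutral", ["deny"])   # False: role deny wins
adult_member     = can_view("neutral", [])         # True: nothing denies them

# Setup B: @everyone denied, an "Over 18" role allowed.
verified_adult   = can_view("deny", ["allow"])     # True: role allow wins
new_member       = can_view("deny", [])            # False: default deny applies
```

Setup B is the stricter default: anyone without the granting role is locked out, which is why removing that one role is enough to revoke access later.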
Channel role gating also gives you an extra opportunity to ensure users read the rules for the adult content channels. This can be accomplished by asking users to react to a message within the rules, DM a secret phrase from the rules to a bot that assigns a role, or request the role directly from a staff member who confirms they’ve read the rules before assigning it, among other systems.
Even though this comparison is important for better understanding of both bots and webhooks, it does not mean you should limit yourself to only picking one or the other. Sometimes, bots and webhooks work their best when working together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name for that message. Both tools are essential for a server to function properly and make for a powerful combination.
*Unconfigurable filters, these will catch all instances of the trigger, regardless of whether they’re spammed or a single instance
**Gaius also offers an additional NSFW filter as well as standard image spam filtering
***YAGPDB offers link verification via google, anything flagged as unsafe can be removed
****Giselle combines Fast Messages and Repeated Text into one filter
Anti-spam is integral to running a large private server or a public server. Spam, by definition, is irrelevant or unsolicited messages. This covers a wide range of behavior on Discord, and there are multiple types of spam a user can engage in; the common forms are listed in the table above. The most common forms of spam are also very typical of raids, namely Fast Messages and Repeated Text. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending many messages with the same contents with the intent of disrupting your server.
There are subsets of this spam that many anti-spam filters will be able to catch. If any of the following are spammed repeatedly in one message or across several messages (Mentions, Links, Invites, Emoji, or Newline Text), they will trigger most Repeated Text and Fast Messages filters. Subset filters are still worth having in your anti-spam setup, as you may wish to punish more or less harshly depending on the type of spam. Namely, Emoji and Links may warrant separate punishments: spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
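The "X in Y" pattern described above can be sketched as a sliding-window counter. The thresholds here (10 events in 5 seconds) follow the example in the text; real bots make these configurable.

```python
from collections import deque

# Minimal "X events in Y seconds" sliding-window check, the pattern most
# anti-spam filters use. An "event" could be one link, one mention, etc.

class FastMessageFilter:
    def __init__(self, max_events=10, window_seconds=5.0):
        self.max_events = max_events
        self.window = window_seconds
        self.timestamps = deque()

    def record(self, timestamp):
        """Record one event; return True once the X-in-Y threshold is exceeded."""
        self.timestamps.append(timestamp)
        # Drop events that have slid out of the Y-second window.
        while self.timestamps and timestamp - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_events

f = FastMessageFilter()
results = [f.record(t * 0.1) for t in range(12)]  # 12 links in ~1.1 seconds
# results[-1] is True: the 11th and 12th events exceed 10-in-5
```

Because the window only counts events, it doesn't matter whether those 10 links arrive in one message or across 10 messages, which is exactly why one filter can serve as both a Fast Messages and a Repeated Text check.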
Sometimes, spam may happen too quickly for a bot to keep up. There are rate limits in place to stop bots from harming servers, and these can prevent deletion of individual messages if those messages are being sent too quickly; this often happens in raids. As such, Fast Messages filters should prevent offenders from sending further messages, whether via a mute, kick, or ban. If you want to protect your server from raids, please read on to the Anti-Raid section of this article.
Text Filters
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**YAGPDB offers link verification via google, anything flagged as unsafe can be removed
***Setting a catch-all filter with carl will prevent link-specific spam detection
A text filter is integral to a well-moderated server. It’s strongly recommended that you use a bot that can filter text based on a blacklist. A Banned Words filter can catch links and invites, provided http:// and https:// are added to the word blacklist (to block all links) or specific full site URLs (to block individual websites). In addition, discord.gg can be added to a blacklist to block ALL Discord invites.
A Banned Words filter is integral to running a public server, especially a Partnered, Community, or Verified server, as this level of auto moderation is highly recommended for adhering to the additional guidelines attached to those programs. Before configuring a filter, it’s a good idea to work out what is and isn’t ok to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers. Because they rely on an explicit blacklist, banned-word filters often won’t account for context. For this reason, it’s important that a robust filter also contains whitelisting options. For example, if you add the slur ‘nig’ to your filter and someone mentions the country ‘Nigeria’, they could get in trouble for using an otherwise acceptable word.
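The 'Nigeria' problem above can be sketched as a blacklist check that strips whitelisted words first. The word lists here are placeholder examples, not a recommended configuration.

```python
# Sketch of a blacklist filter with whitelisting. Stripping whitelisted
# words before matching stops substring hits on acceptable words.

blacklist = ["nig"]                  # the example trigger from the text above
whitelist = ["nigeria", "nigerian"]  # safe words a naive substring match would flag

def message_violates(message):
    text = message.lower()
    for safe in whitelist:           # remove whitelisted words first
        text = text.replace(safe, "")
    return any(banned in text for banned in blacklist)

# message_violates("I visited Nigeria last year") -> False (whitelisted)
# message_violates("nig") -> True (blacklisted)
```

Real filter bots are considerably more sophisticated (word boundaries, leetspeak normalization, per-channel overrides), but the blacklist-plus-whitelist ordering is the core idea.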
Filter immunity may also be important to your server, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to the discussion of real-world issues may require conversations about slurs or other demeaning language; in this case, channel-based immunity is integral to allowing those conversations.
Link filtering is important to servers where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels for sharing such things. This can allow a server to remove links with an appropriate reprimand without treating a transgression with the same severity as they would a user sending a racial slur.
Whitelisting/blacklisting and templates for links are also good to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. As such, being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without intricate setup. However, it is still recommended that you configure a custom filter to ensure that specific slurs, words, etc. that break the rules of your server aren’t being said.
Invite filtering is equally important in large or public servers where users will attempt to raid, scam, or otherwise assault your server with links intended to manipulate your user base into joining elsewhere, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots may also allow per-server white/blacklisting, letting you control which servers are ok to share invites to and which aren’t. A good example of invite filtering usage would be a partners channel, where invites to other, closely linked servers are shared; these servers should be added to an invite whitelist to prevent their deletion.
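A per-server invite whitelist like the partners-channel example can be sketched by extracting invite codes and comparing them against an allowed set. The whitelisted code below is a placeholder for your partner servers' actual invite codes.

```python
import re

# Sketch of invite filtering with a whitelist. discord.gg/CODE and
# discord.com/invite/CODE are the standard invite link formats.
INVITE_RE = re.compile(r"(?:discord\.gg|discord\.com/invite)/([A-Za-z0-9-]+)")
WHITELISTED_CODES = {"partnerserver"}  # placeholder partner invite codes

def disallowed_invites(message):
    """Return invite codes in the message that aren't whitelisted."""
    return [code for code in INVITE_RE.findall(message)
            if code not in WHITELISTED_CODES]

# disallowed_invites("join discord.gg/partnerserver") -> []
# disallowed_invites("raid discord.gg/abc123") -> ["abc123"]
```

A bot built on this idea would delete the message and reprimand the sender only when the returned list is non-empty, leaving partner invites untouched.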
Anti-Raid
Raids, as defined earlier in this article, are mass-joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to protect your community from this behavior. One method involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.
*Unconfigurable, triggers raid prevention based on user joins & damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavoury messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
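The heuristics mentioned above (recent account creation, no profile picture) can be sketched as a simple suspicion score. The 24-hour threshold and the score weights are assumptions for illustration; real systems tune these and add more signals.

```python
from datetime import datetime, timedelta

# Sketch of raid-user heuristics: new accounts and default avatars
# raise suspicion. Threshold and weights are illustrative assumptions.

def raid_suspicion_score(created_at, has_avatar, now):
    score = 0
    if now - created_at < timedelta(hours=24):
        score += 2      # brand-new account
    if not has_avatar:
        score += 1      # default profile picture
    return score

now = datetime(2024, 1, 2)
fresh_account = datetime(2024, 1, 1, 23, 0)  # one hour old
old_account = datetime(2020, 1, 1)

# raid_suspicion_score(fresh_account, has_avatar=False, now=now) -> 3
# raid_suspicion_score(old_account, has_avatar=True, now=now) -> 0
```

A detection system would act (mute, kick, or flag for review) once a user's score crosses a configured threshold, rather than treating any single signal as proof.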
Raid prevention stops a raid from happening, either by Raid detection or Raid-user detection. These countermeasures stop participants of a raid specifically from harming your server by preventing raiding users from accessing your server in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.
Raid anti-spam is an anti spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message removal commands used to clean up channels affected by spam as part of a raid, often aliased to ‘Purge’ or ‘Prune’.

It should be noted that Discord features built-in raid and user bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts when the time difference between the join and leave is extremely small (such as 0-5 seconds). However, you shouldn’t rely solely on these systems if you run a large or public server.
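The join/leave inference described above amounts to a small check over your log timestamps. The 5-second threshold follows the range given in the text; adjust it to taste.

```python
# Sketch of inferring Discord's built-in removals from join/leave logs:
# a near-instant leave after a join suggests Discord removed the account.

def likely_removed_by_discord(join_ts, leave_ts, threshold_seconds=5.0):
    """Timestamps are seconds (e.g. Unix time) from your own logs."""
    return 0 <= leave_ts - join_ts <= threshold_seconds

# likely_removed_by_discord(1000.0, 1002.5) -> True  (2.5s gap)
# likely_removed_by_discord(1000.0, 1600.0) -> False (10 minutes later)
```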
User Filters
Messages aren’t the only way potential evildoers can present unsavoury content to your server. They can also manipulate their Discord username or Nickname to cause trouble. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.
*Gaius can apply same blacklist/whitelist to names as messages or only filter based on items in the blacklist tagged %name
**YAGPDB can use configured word-list filters OR a regex filter
Username filtering is less important than other forms of auto moderation. When choosing which bot(s) to use for your auto moderation needs, this should typically be considered last, since users with unsavory usernames can simply be nicknamed to hide their actual username.
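The regex-based approach mentioned in the YAGPDB note above can be sketched with a single pattern check. The pattern here is a placeholder example targeting a common scam phrase, not a recommended production list.

```python
import re

# Sketch of a regex username filter. The pattern is an illustrative
# placeholder; a real configuration would cover your server's rules.
NAME_PATTERN = re.compile(r"(free\s*nitro|discord\.gg)", re.IGNORECASE)

def username_flagged(name):
    return bool(NAME_PATTERN.search(name))

# username_flagged("Free Nitro Giveaway") -> True
# username_flagged("alice") -> False
```

A bot using this would typically rename or nickname the offender rather than punish them, since an unsavory name is often less deliberate than spam.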
When users open the channel in question, they will see the channel name and see the popup in the screenshot below. Most users will probably not read any channel rules or channel description beyond this. It is important to clearly state what the channel is for in the channel name to prevent users from just assuming it’s a general media channel based on Discord’s popup and the channel name.
For example, if you have a channel that is used for text-based advice about an adult topic and the channel name is vague and has image permissions enabled, users may assume that the channel is for posting content and not for seeking advice/discussing a heavy topic. To make things clearer, try to match the channel name and permissions to the purpose and context of the channel. In this case, changing the channel name to #-advice and disabling image permissions may help users to better understand what the channel is intended to be used for. You can also set a description for the channel where you can more clearly state the purpose, but keep in mind that channel descriptions are less visible to users and may not be seen by everyone entering the channel.
One additional component not included in the table is the effects of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join in the conversation of your server, but in exchange help protect your community from trolls, spam bots, those unable to read your server’s language, or other low intent users. This can make administration and moderation of your server much easier. You’ll also see that the percent of people that visit more than 3 channels increases as they explore the server and follow verification instructions, and that percent talked may increase if people need to type a verification command.
However, in exchange you can expect to see server leaves increase. In addition, total engagement on your other channels may grow at a slower pace. User retention will decrease as well. Furthermore, this will complicate the interpretation of your welcome screen metrics, as the welcome screen will need to be used to help people primarily follow the verification process as opposed to visiting many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions successfully verified. In order to measure the efficacy of your verification system, you may need to use a custom solution to measure the proportion of people that pass or fail verification.
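A custom measurement like the one suggested above can be as simple as comparing two counts from your own logs. The numbers here are placeholders; you would pull them from your bot's logging of instruction views and role grants.

```python
# Sketch of measuring verification pass rate from your own logs, since
# Discord doesn't report this directly. Counts are placeholder examples.

clicked_instructions = 412    # users who opened the verification instructions
completed_verification = 298  # users who received the verified role

pass_rate = completed_verification / clicked_instructions
# A large gap between the two counts suggests the process is too hard
# or too confusing, and may explain increased server leaves.
```

Tracking this ratio over time tells you whether changes to your verification instructions are making the process easier or harder to complete.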
Channels focused on adult topics can provide users with a comfortable space to discuss personal issues and build closeness and trust between members of your community, or just be a space to blow off steam and share content they enjoy. These channels also have very specific risks and required mitigation strategies that will vary depending on the nature of the specific channel.
If you are running a channel on safe sex advice, your main concern will likely be the spread of misinformation and it will be of paramount importance to have reliable and accurate resources on hand. If you run a channel for sharing images, your main concern will likely be making sure that the images shared are legal and properly categorized into your channel. You have to consider what the specific risks in your channel are and ensure that you are writing policies that are specific to your needs and finding moderators that are knowledgeable and comfortable with those topics.