A healthy community on Discord will continue to grow and attract new members who are interested in the purpose of your community. With growth in membership comes the need to evolve your server to meet your members’ needs. One area that may emerge as your server grows is the need to serve your community’s content creators: users who are creating any form of media relating to the audience of your server. Moderating content-related areas will present unique challenges that may not be found in a larger server meant for a game, TV show, or subreddit. Despite these unique challenges, this aspect of fandom still deserves to be celebrated and welcomed! This article will explore what to consider when creating a home for content creators within your server.
Content creation is one of the coolest aspects of a community! Even those who do not create themselves can celebrate the passion and excitement that comes with sharing art. Artists shouldn’t be relegated to just a generalized #media channel where all users are posting photos; consider instead giving them their own designated area in the server. This shows these users that moderators see their contributions to the community and appreciate what they are doing. This area can be a channel dedicated to sharing art and content, or even an entire channel category, depending on how your moderation team wishes to interact with your community’s creators and how active this part of your community may be. Listen to their needs and expand and modify this category as necessary.
When building out a content creation realm in a server, it is important to keep in mind that your moderation team may encounter some new situations that don't apply to the rest of the server. Of course, content creators are subject to the same laws of the land in place for the entire community, but there are some unique rules of the road you may need to consider, including:
Plagiarism. This is the practice of taking someone else’s work and claiming that it is your own. Plagiarizing another content creator should not be tolerated within any creative space. It should be highly discouraged and acted upon with moderator intervention if your community brings an accusation of plagiarism to your moderation team. As moderators, it is important to understand the difference between plagiarism and finding inspiration in someone else’s work. Tracing another creator’s artwork is the most common form of plagiarism, whereas being inspired by an original character to try out a new pose, color scheme, or scene featuring them is inspiration. While creators are often looking out for each other and willing to bring concerns about plagiarism to moderation teams, it is important to be able to look for it yourself by familiarizing yourself with your artists’ styles and reverse image searching images of concern to your team. Be sure you can explain to your community why plagiarizing is harmful when these situations arise.
Managing Constructive Criticism vs. Hate. Your content creation channels are going to be accessible to your entire server. This is so that the entire fandom can celebrate together, but also to drive interest from users to support your content creators. This means that the average user can come in and comment on content. There is a line between constructive criticism and hate. Watch out for it as moderators and be prepared to intervene should anything cross the line into attacks or hate-filled commentary that would give content creation an unwelcoming atmosphere. Oftentimes in creative communities it is an unspoken rule that you should not give constructive criticism unless it is specifically asked for. The average user may not realize this and could accidentally offend an artist. As a moderator, it’s important to help artists understand constructive criticism when they ask for it while shielding them from trolls or baseless hate. Sharing content can be intimidating, so it is especially important to ensure that content creation channels remain positive and respectful environments. One way you can mitigate this issue is by making it clear in your rules that unless the artist specifically asks for constructive criticism, feedback of that nature is not allowed.
Bumping. Art bumping may occur in an art channel where artists feel their content isn’t easily viewed by enough people. This is essentially the act of a piece of media getting pushed up and out of view in chat by other people sharing their own media at the same time or by chatter about other works. An accusation of bumping usually comes up when a creator feels their art isn’t being noticed, or if they believe someone they do not have a good relationship with is intentionally bumping their work. In this case, it’s important to defuse the situation and not allow any forms of bullying by de-escalating the conflict. Maintaining an environment where users respect everyone's work is necessary for the peace of mind of creators and consumers alike. You can also consider building out a channel category instead of a single channel, which would allow for a channel dedicated to posting art and a separate one for discussion. You may contemplate a rule of not posting art within a certain time frame of another creator posting, but be cautioned that this can lead to over-moderation by your community.
NSFW content. If your server allows Not Safe for Work content, it is important that you create a dedicated channel for it that is marked as an NSFW channel and kept separate from your regular content creation channels. In line with Discord’s policies, this prevents users from seeing the channel without agreeing to a prompt confirming they are not a minor. It is also important to consider that the implementation of an NSFW channel disqualifies you from being a Partnered or Verified Discord server. Make sure to keep the expectations around SFW and NSFW content creation in line with those of your entire server, and offer to answer any questions in DMs if a creator thinks a piece may toe the boundaries you enforce.
Advertising Commissions. If you have a blanket ban on advertisement in your community, you may not want to make an exception to the rule here. However, if you decide to allow advertising commissions in your server, you are allowing more commissions to flow to your creators. Do not allow other users to beg for free art or try to guilt creators with open commissions into providing free content to them. It may be the case that your moderation team will have to enforce boundaries if someone who commissions a creator within your community doesn’t pay them or revokes payment. Conversely, if a creator requires payment up front and then does not deliver work and doesn’t refund the commissioner, moderators should step in and no longer allow them to accept commissions from other members.
To be clear, you are not responsible for their financial disputes or business transactions. Ultimately, creators should look into their specific payment provider website for policy information on fraud and filing disputes, both of which are out of your control. Your job as a moderation team is to protect creators from scammers who make themselves known within your community. You’ve created this space to cater to creators and need them to know that users who take advantage of them and creators who take advantage of users are not welcome here.
Low Quality/Low Effort Art. Something your moderation team should consider is whether or not you will be moderating low quality or low effort art. Lower quality art has the ability to potentially create a divide with more experienced artists or diminish the overall quality of your artistic channels. Expectedly, this is a very subjective and divisive topic. Moderating “low quality” or “low effort” art can run the risk of upsetting younger users or creators that are at the very beginning of learning how to create. When considering moderating low quality art, be sure to display empathy and compassion to avoid coming off as inconsiderate or rude. Be honest and realistic in your descriptions and requirements for these art spaces so that users may have a better idea as to what is and isn’t acceptable both content and quality wise. Other ways to healthily promote higher quality artists include potential role systems, automatic pins, or weekly artist highlights, which will be discussed in further detail below.
Off Topic Art. As a team, think about whether you want your artist channels to be dedicated to the purpose of your server or if you want to also allow off topic content. Once this rule is decided, check that your moderation team is on the same page for enforcement and nudging should you decide not to allow off topic art.
There are several ways to keep your community’s content creators engaged, which helps to showcase how much your moderation team values their contributions to the server. Discord has several native features that can showcase your community’s talent in emojis, stickers, banners, and server icons. While the banner and server icon are important for branding and thus rarely changed, generating emojis and stickers (especially from within your community) is a good way to bond, celebrate inside jokes with your community, and show some love for your creators. Oftentimes communities will employ yearly opportunities like emoji elections, where creators can submit emojis for consideration and the community as a whole can vote on them.
Continued engagement with your content creators is also important. If you are engaging your community with generalized game or server events, examine whether you can engage your content creators in the same way with art events or monthly prompts, as this promotes community bonding. If your community has a system to reward winners for their work or participation in events, work to instill the same kind of system for art adjacent events or prompts.
Finally, some communities may want to introduce a special role for content creators, especially those who are active and consistently contributing quality work. This showcases the server’s artists to the rest of the community. Do keep in mind, however, that unique role colors can lead to inadvertent exclusivity and a social hierarchy within the server. This can also have the effect of alienating artists who do not yet have the role, which is why you should be careful when thinking about whether you want to introduce this role to your community. If you decide to bring a specialized role into your server, take the step to define clear criteria for users to qualify for it as well as straightforward rules for moderators to grant it. This ensures that your moderation team can avoid accidentally leaving someone out and hurting someone’s feelings. Avoid bringing a role into your server if your server has had problems with role-related hierarchies in the past. Listen to your community’s needs and anticipate potential problems!
*Unless you are using the channel description for verification instructions rather than an automatic greeter message.
If you want to use the remove unverified role method, you will need a bot that can automatically assign a role to a user when they join.
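If your moderation team runs its own bot, a minimal sketch of that join-time role assignment might look like the following. This example assumes discord.py and a role literally named “Unverified”; the role name and token placeholder are illustrative, not part of any particular bot.

```python
# A minimal discord.py sketch: give new members an "Unverified" role when they join.
# The role name and the token placeholder are illustrative assumptions.
import discord

intents = discord.Intents.default()
intents.members = True  # needed to receive member join events

client = discord.Client(intents=intents)

@client.event
async def on_member_join(member: discord.Member):
    role = discord.utils.get(member.guild.roles, name="Unverified")
    if role is not None:
        await member.add_roles(role, reason="New member awaiting verification")

client.run("YOUR_BOT_TOKEN")  # placeholder token
```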
Verification Actions
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.
In order to use the command in channel method, you will need to instruct your users to type a command that removes the Unverified role from them or adds the Verified role to them.
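As a rough illustration of the command-in-channel method, here is a hedged discord.py sketch. The `!verify` command name and the “Unverified”/“Verified” role names are assumptions for the example; whichever bot you choose will have its own configuration.

```python
# Sketch of the command-in-channel method using discord.py's commands extension.
# The "!verify" command and the "Unverified"/"Verified" role names are illustrative.
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # required to read command text

bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command(name="verify")
async def verify(ctx: commands.Context):
    unverified = discord.utils.get(ctx.guild.roles, name="Unverified")
    verified = discord.utils.get(ctx.guild.roles, name="Verified")
    if unverified is not None and unverified in ctx.author.roles:
        await ctx.author.remove_roles(unverified, reason="Passed verification")
    if verified is not None:
        await ctx.author.add_roles(verified, reason="Passed verification")
    await ctx.message.delete()  # keep the verification channel tidy

bot.run("YOUR_BOT_TOKEN")  # placeholder token
```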
Content creators are an exciting subset of fandom that should be welcomed to your community! Ensure that they have their own area to share all forms of content in, whether it be a channel or an entire channel category. Be aware that content creation arenas often come with unique rule considerations that you may not have encountered previously in the daily moderation of your server. Talk to your moderation team about everything before launching this channel or category so that you are all on the same wavelength with enforcement before jumping in. Continuously engage your creators and involve them in the artistic aspects of the server, such as emoji and sticker creation. Art can bring people together, and having a healthy artistic space within your community will provide a new way for your community to bond and celebrate your fandom!
Markdown is also supported in an embed. Here is an image to showcase an example of these properties:
[Image: example showcasing the elements of an embed]
An important thing to note is that embeds also have their limitations, which are set by the API. Here are some of the most important ones you need to know:
If you feel like experimenting even further you should take a look at the full list of limitations provided by Discord here.
It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool which provides visualization for bot and webhook embeds.
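As a loose illustration of that JSON structure, the sketch below posts an embed through a webhook’s execute endpoint using only the Python standard library. The webhook URL is a placeholder and the embed contents are purely illustrative.

```python
# Sketch: sending an embed, written as JSON, through a webhook's execute endpoint.
# The webhook URL is a placeholder and the embed contents are illustrative.
import json
import urllib.request

WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder

payload = {
    "embeds": [
        {
            "title": "Example embed",
            "description": "Markdown such as **bold** and *italics* works here.",
            "color": 0x5865F2,  # colors are plain integers
            "fields": [
                {"name": "Field name", "value": "Field value", "inline": True},
            ],
            "footer": {"text": "Footer text"},
        }
    ]
}

request = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "User-Agent": "embed-example"},
)
urllib.request.urlopen(request)
```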
Even though this comparison is important for better understanding of both bots and webhooks, it does not mean you should limit yourself to only picking one or the other. Sometimes, bots and webhooks work their best when working together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name for that message. Both tools are essential for a server to function properly and make for a powerful combination.
*Unconfigurable filters, these will catch all instances of the trigger, regardless of whether they’re spammed or a single instance
**Gaius also offers an additional NSFW filter as well as standard image spam filtering
***YAGPDB offers link verification via google, anything flagged as unsafe can be removed
****Giselle combines Fast Messages and Repeated Text into one filter
Anti-Spam is integral to running a large private server or a public server. Spam, by definition, is irrelevant or unsolicited messages. This covers a wide base of things on Discord, and there are multiple types of spam a user can engage in. The common forms are listed in the table above. The most common forms of spam are also very typical of raids, those being Fast Messages and Repeated Text. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending lots of messages with the same contents with the intent of disrupting your server.
There are subsets of this spam that many anti-spam filters will be able to catch. If any of the following: Mentions, Links, Invites, Emoji, and Newline Text are spammed repeatedly in one message or spammed repeatedly across several messages, they will provoke most Repeated Text and Fast Messages filters appropriately. Subset filters are still a good thing for your anti-spam filter to contain as you may wish to punish more or less harshly depending on the spam. Namely, Emoji and Links may warrant separate punishments. Spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
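To make the “X in Y” idea concrete, here is a minimal, illustrative sketch of a link-spam check: it flags a user who posts more than 10 links within 5 seconds, whether those links arrive in one message or across many. The thresholds and the link pattern are arbitrary choices for the example.

```python
# Minimal "X in Y" sketch: flag a user who sends more than 10 links within 5 seconds,
# counted across one message or many. Thresholds and the link pattern are illustrative.
import re
import time
from collections import defaultdict, deque

LINK_PATTERN = re.compile(r"https?://\S+")
MAX_LINKS = 10      # X
WINDOW_SECONDS = 5  # Y

recent_links = defaultdict(deque)  # user_id -> timestamps of recently sent links

def is_link_spam(user_id: int, message_content: str) -> bool:
    now = time.monotonic()
    timestamps = recent_links[user_id]
    # Record one timestamp per link found in this message.
    timestamps.extend(now for _ in LINK_PATTERN.findall(message_content))
    # Drop anything older than the window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    return len(timestamps) > MAX_LINKS
```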
Sometimes, spam may happen too quickly for a bot to catch up. There are rate limits in place to stop bots from harming servers that can prevent deletion of individual messages if those messages are being sent too quickly. This can often happen in raids. As such, Fast Messages filters should prevent offenders from sending messages; this can be done via a mute, kick or ban. If you want to protect your server from raids, please read on to the Anti-Raid section of this article.
Text Filters
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**YAGPDB offers link verification via google, anything flagged as unsafe can be removed
***Setting a catch-all filter with carl will prevent link-specific spam detection
A text filter is integral to a well moderated server. It’s strongly recommended that you use a bot that can filter text based on a blacklist. A Banned Words filter can catch links and invites provided http:// and https:// are added to the word blacklist (for all links) or specific full site URLs are added to block individual websites. In addition, discord.gg can be added to a blacklist to block ALL Discord invites.
A Banned Words filter is integral to running a public server, especially if it’s a Partnered, Community, or Verified server, as this level of auto moderation is highly recommended for the server to adhere to the additional guidelines attached to it. Before configuring a filter, it’s a good idea to work out what is and isn’t ok to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers, regardless of context. Banned word filters often won’t account for context when working from an explicit blacklist. For this reason, it’s important that a robust filter also contains whitelisting options. For example, if you add the slur ‘nig’ to your filter and someone mentions the country ‘Nigeria’, they could get in trouble for using an otherwise acceptable word.
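As an illustration of why whitelisting matters, the sketch below checks messages against a blacklist but skips whitelisted words first, so “Nigeria” never trips a filter entry like “nig”. The word lists are placeholders.

```python
# Sketch of a banned-words check with a whitelist, so substrings like "nig" inside
# the whitelisted word "nigeria" do not trigger the filter. Word lists are illustrative.
import re

BLACKLIST = {"badword", "slur"}        # placeholder entries
WHITELIST = {"nigeria", "scunthorpe"}  # legitimate words that contain blacklisted substrings

def contains_banned_word(message: str) -> bool:
    for word in re.findall(r"[a-z]+", message.lower()):
        if word in WHITELIST:
            continue
        if any(banned in word for banned in BLACKLIST):
            return True
    return False
```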
Filter immunity may also be important to your server, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real world issues may require discussions about slurs or other demeaning language; in this exception, channel-based immunity is integral to allowing those conversations.
Link filtering is important to servers where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels for sharing such things. This can allow a server to remove links with an appropriate reprimand without treating a transgression with the same severity as they would a user sending a racial slur.
Whitelisting/blacklisting and templates for links are also a good idea to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. As such, being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without intricate setup. However, it is recommended you also configure a custom filter to ensure specific slurs, words, etc. that break the rules of your server aren’t being said.
Invite filtering is equally important in large or public servers where users will attempt to raid, scam, or otherwise assault your server with invite links intended to manipulate your user base into joining other servers, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots may also allow per-server white/blacklisting, letting you control which servers are ok to share invites to and which aren’t. A good example of invite filtering usage would be something like a partners channel, where invites to other, closely linked servers are shared. These servers should be added to an invite whitelist to prevent their deletion.
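A rough sketch of invite filtering with a whitelist might look like this; the regex and the allowed invite codes are illustrative, and a real bot would act on the result (delete, warn, or escalate) according to your server’s rules.

```python
# Sketch of an invite filter with a whitelist of allowed invite codes.
# The regex covers discord.gg and discord.com/invite links; the whitelist is illustrative.
import re

INVITE_PATTERN = re.compile(r"(?:discord\.gg|discord(?:app)?\.com/invite)/([A-Za-z0-9-]+)")
ALLOWED_INVITE_CODES = {"partner-server-code"}  # placeholder partner invites

def find_disallowed_invites(message: str) -> list[str]:
    codes = INVITE_PATTERN.findall(message)
    return [code for code in codes if code not in ALLOWED_INVITE_CODES]
```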
Anti-Raid
Raids, as defined earlier in this article, are mass-joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to you in order to protect your community from this behavior. One method involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.
*Unconfigurable, triggers raid prevention based on user joins & damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavoury messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
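Combining the two ideas above, an illustrative sketch of raid detection (X joins in Y seconds) and a simple raid-user heuristic (very new account or no avatar) could look like this. All thresholds are arbitrary examples.

```python
# Sketch of "X joins in Y seconds" raid detection plus a simple raid-user heuristic
# (very new account, default avatar). Thresholds are illustrative.
import time
from collections import deque
from datetime import datetime, timezone, timedelta

JOIN_THRESHOLD = 10   # X joins...
JOIN_WINDOW = 30      # ...within Y seconds
recent_joins = deque()

def record_join_and_check_raid() -> bool:
    now = time.monotonic()
    recent_joins.append(now)
    while recent_joins and now - recent_joins[0] > JOIN_WINDOW:
        recent_joins.popleft()
    return len(recent_joins) >= JOIN_THRESHOLD

def looks_like_raid_user(account_created_at: datetime, has_avatar: bool) -> bool:
    account_age = datetime.now(timezone.utc) - account_created_at
    return account_age < timedelta(days=1) or not has_avatar
```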
Raid prevention stops a raid from happening, either by Raid detection or Raid-user detection. These countermeasures stop participants of a raid specifically from harming your server by preventing raiding users from accessing your server in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.
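For instance, a simple form of damage prevention is a “lockdown” that denies the @everyone role the ability to send messages in public text channels. A hedged discord.py sketch, with illustrative channel selection and reason text:

```python
# Sketch of a "lockdown" form of damage prevention with discord.py: deny the
# @everyone role permission to send messages in all text channels.
import discord

async def lockdown(guild: discord.Guild):
    for channel in guild.text_channels:
        overwrite = channel.overwrites_for(guild.default_role)
        overwrite.send_messages = False
        await channel.set_permissions(
            guild.default_role, overwrite=overwrite, reason="Raid damage prevention"
        )
```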
Raid anti-spam is an anti spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message removal commands to clean up channels affected by spam as part of a raid, often aliased to ‘Purge’ or ‘Prune’. It should be noted that Discord features built-in raid and user bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts if the time difference between the join and the leave times is extremely small (such as between 0-5 seconds). However, you shouldn’t rely solely on these systems if you run a large or public server.
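As an example of such a cleanup command, here is a hedged discord.py sketch of a `!purge` command that bulk-deletes recent messages; the command name, permission check, and default limit are illustrative.

```python
# Sketch of a raid cleanup command with discord.py: bulk-delete recent messages in the
# current channel. The command name and default limit are illustrative.
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command(name="purge")
@commands.has_permissions(manage_messages=True)
async def purge(ctx: commands.Context, limit: int = 100):
    deleted = await ctx.channel.purge(limit=limit)
    await ctx.send(f"Removed {len(deleted)} messages.", delete_after=5)

bot.run("YOUR_BOT_TOKEN")  # placeholder token
```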
User Filters
Messages aren’t the only way potential evildoers can present unsavoury content to your server. They can also manipulate their Discord username or Nickname to cause trouble. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.
*Gaius can apply same blacklist/whitelist to names as messages or only filter based on items in the blacklist tagged %name
**YAGPDB can use configured word-list filters OR a regex filter
Username filtering is less important than other forms of auto moderation. When choosing which bot(s) to use for your auto moderation needs, this should typically be considered last, since users with unsavory usernames can simply be nicknamed in order to hide their actual username.
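If you do want a basic safety net, a hedged sketch of a username check might look like the following: it compares a member’s display name against the same kind of banned-word list used for messages and applies a neutral nickname on a match. The fragment list and fallback nickname are illustrative.

```python
# Sketch of a username check run when members join or change their name: compare the
# name against a banned-word list and replace it with a neutral nickname on a match.
# The fragment list and fallback nickname are illustrative.
import discord

BANNED_NAME_FRAGMENTS = {"badword", "slur"}  # placeholder entries

async def screen_member_name(member: discord.Member):
    name = (member.nick or member.name).lower()
    if any(fragment in name for fragment in BANNED_NAME_FRAGMENTS):
        await member.edit(nick="Moderated Nickname", reason="Username filter match")
```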
One additional component not included in the table is the effects of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join in the conversation of your server, but in exchange help protect your community from trolls, spam bots, those unable to read your server’s language, or other low intent users. This can make administration and moderation of your server much easier. You’ll also see that the percent of people that visit more than 3 channels increases as they explore the server and follow verification instructions, and that percent talked may increase if people need to type a verification command.
However, in exchange you can expect to see server leaves increase. In addition, total engagement on your other channels may grow at a slower pace. User retention will decrease as well. Furthermore, this will complicate the interpretation of your welcome screen metrics, as the welcome screen will need to be used to help people primarily follow the verification process as opposed to visiting many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions successfully verified. In order to measure the efficacy of your verification system, you may need to use a custom solution to measure the proportion of people that pass or fail verification.
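One possible shape for such a custom solution is sketched below: record who joins and who ever receives the “Verified” role, then compute the pass rate. The role name and the in-memory storage are assumptions; a real setup would persist this data and hook these functions to member join and update events.

```python
# Sketch of a custom measurement for verification efficacy: count how many members who
# joined ever received the "Verified" role. Role name and in-memory storage are
# illustrative; a real setup would persist this data.
import discord

joined: set[int] = set()
verified: set[int] = set()

def record_join(member: discord.Member):
    joined.add(member.id)

def record_update(before: discord.Member, after: discord.Member):
    before_names = {role.name for role in before.roles}
    after_names = {role.name for role in after.roles}
    if "Verified" in after_names and "Verified" not in before_names:
        verified.add(after.id)

def pass_rate() -> float:
    return len(verified & joined) / len(joined) if joined else 0.0
```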