Confidentiality and discretion from your moderation team play an important role in building and maintaining trust between your users and your staff. While transparency is important, your moderation team must carefully weigh every detail learned through both moderation and the management of a community in order to assess what is and is not appropriate to share publicly. While it can be challenging to discern what to omit and what to share, privacy is paramount to Discord, and it should be paramount to every element of a community in order to uphold the trust that each member holds.
Whatever moderation roles a server may have, there should always be an authority role that can make calls at their discretion if they believe it is best for the community. A good example of how to do just that can be found here. Moderation administrators, leaders, managers, and so on should always be prepared to make judgment calls on the information provided to them, whether by mods or users. A very common misconception among moderation teams is that all information should be shared amongst the team for transparency. This can be a double-edged sword: disclosing private information that is not essential for a moderator opens more routes for unauthorized distribution of that information. If that occurs, it compromises the privacy and trust of the users the information concerns. In sensitive situations involving very volatile information, consider whether it would be better handled directly by a team leader or even the owner of the community.
Personally identifiable information (or PII) is any information that can identify a user, such as an email address, full name, phone number, IP address, exact location, or even their Discord user ID and username.
Users should never disclose anyone's personal information but their own, and even then only in an appropriate environment. Disclosing others' information can be treated as doxxing, that is, the disclosure of personal information by a third party (for example, someone posting another user's address), and can in some instances be actioned on by Trust and Safety, as it may violate Discord's Terms of Service and Community Guidelines. Sharing user IDs and usernames is acceptable as long as there is a justifiable need to disclose them, but always consider whether disclosure could have repercussions for that user.
PII is very sensitive, as its exposure removes a user's privacy and can result in them being targeted online or even in real life. This information should therefore always be protected with the utmost discretion. Moderators may come into contact with PII in several ways: a message they have to delete, someone maliciously doxxing another person, a user accidentally sharing their own information without realizing the harm they are exposing themselves to, or information included in a report. This information typically should not be disclosed to anyone, and community leaders should consider removing it from bot logging channels to protect a user's identity.
Also consider encouraging members of your community to learn how to safeguard their own information. You can include rules within your community that discourage sharing even one's own personal information. As important as it is to protect other users, it is just as important to help them protect themselves. Users may share their information out of goodwill or as a way of bonding with others, but bad actors can use that information maliciously.
Personal matters can refer to a huge range of information; common examples include relationships, interpersonal conflicts, previous history, or things as simple as a DM or private conversation. As a moderator, you will very likely come across such information through reports, concerns, or even someone breaching trust by screenshotting and sharing private messages. It is extremely important to protect this information, as people trust you to keep it private and to use it only to take care of the issue at hand. Exposure of this information can be very harmful and can result in targeted harassment, bullying, or worse. Stories of such exposure can leave people worried about reporting anything for fear of it happening to them, which makes it very difficult for moderators both to reassure users and to rectify problems.
Most public communities protect their server with moderation tools, actions, and procedures. This includes moderator actions such as warnings, kicks, mutes, and bans, which can be especially sensitive when they involve a specific user. Moderation information can also include internal details such as protocols, procedures, censor lists, and bot configuration.
Moderation information varies from server to server, so it is largely up to the discretion of each moderation team which rules to institute and enforce. Some teams may opt for full transparency with an open log channel, and some may take a more confidential approach and speak only with those involved. Both have their pros and cons, so be sure to weigh what could happen if people know who receives which penalties. As for protocol, always decide carefully what to share publicly: disclosing a procedure can let someone use that information to evade moderators or even exploit the server. The same holds true for bots, as disclosing details such as configuration or censor lists can let users evade the protections your team has put in place.
There are many forms of information that must be considered carefully before being disclosed to anyone, whether users or other mods. It ranges from sensitive personal information, such as emails, names, phone numbers, locations, and IP addresses, to community-related information, such as mod actions, previous incidents, and user history. With users, very little should be shared with people who are not involved. With fellow mods, it is generally best to share as much information as is reasonable, personal information aside, to ensure everyone can make well-informed decisions.
Some questions to consider when speaking with users include:
As for mods and other members of the internal team, they should of course be kept "in the loop" on the story of a situation; it is never recommended to keep mod teams in the dark. That said, even with other moderators, be careful about sharing unnecessary information, especially personally identifying information, not only because there is often little benefit to it, but primarily because it compromises a user's privacy even behind closed doors. While there are fewer factors to consider, they are still just as important as the ones you would weigh when disclosing to another user.
Some things to consider when disclosing to moderators include:
Remember: if you aren't sure whether you should disclose something related to moderation, always ask an administrator or leader on your server for guidance, and always dispose of private information once it is no longer needed.
It may be easier to be fully transparent and not have to check every sentence before it is said or sent. That said, there are many benefits to upholding a consistent, confidential environment where staff act with discretion when assisting with a variety of matters, and there are many consequences when confidentiality is not upheld properly. Below are some examples of the benefits of protecting information, as well as the consequences that can come with being overly transparent.
Keeping Pseudonymous. As stated in Discord's Safety Principles, Discord is pseudonymous, which means that your account on Discord doesn't need to be tied back to your identity. Users who provide information as evidence or otherwise may sometimes expose who they are; protecting that information reassures them that their personal life won't be compromised by socializing with or confiding in a server's staff.
Trust. Users will hold high trust in a staff team if they are confident that high expectations of privacy will be respected by the team they confide in. If those expectations are not upheld, users will find it difficult to trust the team and may hesitate to contact, or refrain entirely from contacting, the moderation team in the future.
User Safety. Diligent protection of user data and information keeps users safe by preventing that data from getting into the wrong hands. If it is not guarded, private details that reach malicious individuals can result in targeted harassment or bullying.
Moderator Safety. Keeping moderation actions confidential and disclosing information only to people who need to know preserves moderator anonymity and reinforces the idea of a team decision. Disclosing moderation actions and who performed them can put a target on that moderator, as people may hold them personally responsible for the action, which can result in harassment or disrespect from users who do not understand the decision.
Personally identifiable information shared outside of need-to-know groups can compromise users and make them feel they must sacrifice their Discord account to retain their personal privacy. This leads to a loss of trust from the member, and perhaps even the loss of them as a member of your community.
There are multiple things to be mindful of when considering privacy and confidentiality, and they extend well beyond standard moderation. Often, privacy comes down to the way the server is configured. Some things to consider include:
Server Discoverability. If an LGBTQ+ server is in Server Discovery and a user uses one of its emotes in another server, someone who clicks on the emote can see which server it comes from. This may accidentally expose a user who identifies as LGBTQ+ privately but not publicly.
Public Join Messages. Some servers use "welcome bots" or Discord's built-in welcome feature to greet new users publicly upon joining. Server staff should take into account the type of community they stand for and consider whether users may feel uncomfortable or exposed by being mentioned immediately upon joining.
Security. Automated security and "gatekeeper" bots may be used to prevent malicious users from joining a server on alt accounts or as part of malicious groups. While this seems perfectly normal, consider what data such a bot requests. Some of these bots collect IP addresses, browser data, and various other forms of information, and users may not be comfortable supplying information that could compromise who they are. Always read through the privacy statement of any bot you add to ensure that you are not asking too much of regular members.
Bot Logging. Many servers have private log channels maintained by one or more bots, tracking joins, leaves, deleted or edited messages, and more. There are two main points to be wary of here: first, if personal information is posted for any reason, be it accidentally by misclick or maliciously to dox a user, it will usually appear in a moderator logging channel when deleted; second, it will persist there, so after the situation has been dealt with, owners or admins should consider deleting the log message to keep personal information from lingering in that channel.
If you want to use the remove unverified role method, you will need a bot that can automatically assign a role to a user when they join.
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.
In order to use the command in channel method, you will need to instruct your users to remove the Unverified role or to add the Verified role to themselves.
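To make the role mechanics above concrete, here is a minimal sketch using the discord.py library. The library choice, the role name "Unverified", the !verify command name, and the token placeholder are all assumptions for illustration, not a prescribed setup.

```python
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True          # needed to receive member join events
intents.message_content = True  # needed for prefix commands in discord.py 2.x

bot = commands.Bot(command_prefix="!", intents=intents)

@bot.event
async def on_member_join(member: discord.Member):
    # "Remove unverified role" method: gate new joiners behind Unverified.
    role = discord.utils.get(member.guild.roles, name="Unverified")
    if role is not None:
        await member.add_roles(role, reason="New member pending verification")

@bot.command(name="verify")
async def verify(ctx: commands.Context):
    # "Command in channel" method: the user types !verify to shed the role.
    role = discord.utils.get(ctx.guild.roles, name="Unverified")
    if role is not None and role in ctx.author.roles:
        await ctx.author.remove_roles(role, reason="Completed verification")

bot.run("YOUR_BOT_TOKEN")  # placeholder token
```

The reaction-based method works the same way, except the role change happens in an on_raw_reaction_add handler instead of a command.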
There are pros and cons to any level of disclosure a server offers its community and its staff. It is not black and white: there are gray areas in both transparency and revealing select information at moderator discretion. There must always be a balance of both, one that may shift depending on the situation at hand and the type of community present. Just as complete confidentiality will lead to distrust, total transparency will leave users feeling unprotected due to a lack of privacy.
Markdown is also supported in an embed. Here is an image to showcase an example of these properties:
Example image to showcase the elements of an embed
An important thing to note is that embeds also have their limitations, which are set by the API. Here are some of the most important ones you need to know:
If you feel like experimenting even further, you should take a look at the full list of limitations provided by Discord here.
It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool, which provides visualization for both bot and webhook embeds.
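As a hedged illustration of that JSON structure, the sketch below posts an embed through a webhook using Python's requests library. The webhook URL, names, and values are placeholders; the keys mirror the embed elements described above.

```python
import requests

# Placeholder URL; create a real one under Server Settings > Integrations > Webhooks.
WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"

payload = {
    "username": "Example Hook",  # per-message display name override
    "embeds": [{
        "title": "Example Embed",
        "description": "Embed descriptions support **markdown** too.",
        "color": 0x5865F2,  # integer color of the embed's left border
        "fields": [
            {"name": "Field name", "value": "Field value", "inline": True},
        ],
        "footer": {"text": "Footer text"},
    }],
}

response = requests.post(WEBHOOK_URL, json=payload)
response.raise_for_status()  # a 400 here usually means an embed limit was exceeded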
Even though this comparison is important for a better understanding of both bots and webhooks, it does not mean you should limit yourself to picking only one or the other. Sometimes, bots and webhooks work best together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name. Both tools are essential for a server to function properly and make for a powerful combination.
*Unconfigurable filters; these will catch all instances of the trigger, whether spammed or a single occurrence
**Gaius also offers an additional NSFW filter as well as standard image spam filtering
***YAGPDB offers link verification via Google; anything flagged as unsafe can be removed
****Giselle combines Fast Messages and Repeated Text into one filter
Anti-spam is integral to running a large private server or a public server. Spam, by definition, is irrelevant or unsolicited messages. This covers a wide range of behavior on Discord, and there are multiple types of spam a user can engage in. The common forms are listed in the table above. The most common forms of spam, Fast Messages and Repeated Text, are also very typical of raids. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending lots of messages with the same contents with the intent of disrupting your server.
There are subsets of this spam that many anti-spam filters will be able to catch. If any of the following are spammed repeatedly in one message or across several messages, they will trigger most Repeated Text and Fast Messages filters appropriately: Mentions, Links, Invites, Emoji, and Newline Text. Subset filters are still a good thing for your anti-spam filter to contain, as you may wish to punish more or less harshly depending on the type of spam. Namely, Emoji and Links may warrant separate punishments: spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an "X in Y" fashion: if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in each of 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters, as the sketch below illustrates.
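Here is a minimal sketch of that "X in Y" logic in Python, assuming illustrative thresholds of 10 events in 5 seconds; real filters track several event types (messages, links, mentions) with separately tuned limits.

```python
import time
from collections import defaultdict, deque

MAX_EVENTS = 10       # X: events allowed...
WINDOW_SECONDS = 5.0  # ...per Y seconds (illustrative values)

recent = defaultdict(deque)  # user_id -> timestamps of that user's recent events

def is_spamming(user_id, now=None):
    """Record one event (a message, link, mention, etc.) and report whether
    the user has exceeded X events within the last Y seconds."""
    now = time.monotonic() if now is None else now
    events = recent[user_id]
    events.append(now)
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()  # discard events that fell out of the window
    return len(events) > MAX_EVENTS
```

Feeding every message into this counter makes it a Fast Messages filter; feeding only links or mentions into it makes it a subset filter with its own thresholds.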
Sometimes, spam happens too quickly for a bot to keep up. Discord imposes rate limits to stop bots from harming servers, and these limits can prevent the deletion of individual messages if they are being sent too quickly; this often happens in raids. As such, Fast Messages filters should prevent offenders from sending further messages, whether via a mute, kick, or ban. If you want to protect your server from raids, read on to the Anti-Raid section of this article.
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**YAGPDB offers link verification via Google; anything flagged as unsafe can be removed
***Setting a catch-all filter with Carl will prevent link-specific spam detection
A text filter is integral to a well-moderated server. It is strongly recommended that you use a bot that can filter text based on a blacklist. A banned-words filter can catch links and invites, provided http:// and https:// are added to the word blacklist (to block all links) or specific full site URLs are added to block individual websites. In addition, discord.gg can be added to a blacklist to block ALL Discord invites.
A banned-words filter is integral to running a public server, especially a Partnered, Community, or Verified server, as this level of auto-moderation is highly recommended for adhering to the additional guidelines attached to those programs. Before configuring a filter, it’s a good idea to work out what is and isn’t okay to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers, regardless of context. Banned-word filters with an explicit blacklist often won’t account for context, so it is important that a robust filter also contains whitelisting options. For example, if you add the slur ‘nig’ to your filter and someone mentions the country ‘Nigeria’, they could get in trouble for using an otherwise acceptable word; the sketch below shows how a whitelist prevents this.
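The Nigeria example can be made concrete with a small sketch. The blacklist and whitelist contents are illustrative, and a production filter would also handle obfuscated spellings, which this one does not.

```python
import re

BLACKLIST = {"nig"}                  # the substring from the example above
WHITELIST = {"nigeria", "nigerian"}  # acceptable words the substring would flag

def contains_banned_word(message):
    for word in re.findall(r"[a-z']+", message.lower()):
        if word in WHITELIST:
            continue  # whitelisted words are always allowed
        if any(banned in word for banned in BLACKLIST):
            return True  # substring match on everything else
    return False

assert contains_banned_word("I visited Nigeria") is False
assert contains_banned_word("nig") is True
```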
Filter immunity may also be important to your server, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to the discussion of real-world issues may require conversations about slurs or other demeaning language; in such cases, channel-based immunity is integral to allowing those conversations.
Link filtering is important to servers where sharing links in "general" chats isn’t allowed, or where there are specific channels for sharing such things. This allows a server to remove links with an appropriate reprimand, without treating the transgression with the same severity as it would a user sending a racial slur.
Whitelisting/blacklisting and templates for links are also good to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. Being able to filter specific links is therefore a good feature, and preset filters (like the Google filter provided by YAGPDB) come in very handy for protecting your user base without intricate setup. However, it is still recommended that you configure a custom filter to ensure that specific slurs, words, and the like that break the rules of your server aren’t being said.
Invite filtering is equally important in large or public servers, where users will attempt to raid, scam, or otherwise assault your server with links intended to manipulate your user base into joining elsewhere, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots also allow per-server white/blacklisting, letting you control which servers’ invites are okay to share and which aren’t. A good example of invite-filtering usage is a partners channel, where invites to other, closely linked servers are shared; those servers should be added to an invite whitelist to prevent their deletion, as sketched below.
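A per-server invite whitelist might look like the following sketch. The regex covers the common discord.gg and discord.com/invite link forms, and the whitelisted code is a placeholder for your partner servers' invites.

```python
import re

INVITE_WHITELIST = {"partnerInvite123"}  # placeholder codes for partnered servers

# Matches discord.gg/<code> and discord(app).com/invite/<code> links.
INVITE_PATTERN = re.compile(
    r"(?:discord\.gg|discord(?:app)?\.com/invite)/([A-Za-z0-9-]+)",
    re.IGNORECASE,
)

def unwhitelisted_invites(message):
    """Return invite codes in the message that are not on the whitelist."""
    return [c for c in INVITE_PATTERN.findall(message) if c not in INVITE_WHITELIST]
```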
Raids, as defined earlier in this article, are mass joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to protect your community from this behavior. One involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.
*Unconfigurable; triggers raid prevention based on user joins and damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of user joins that is typical of a raid, usually in an "X in Y" format. This feature is usually chained with raid prevention or damage prevention to stop the detected raid from being effective, since raiding users will typically spam channels with unsavory messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid, independently of the quantity or frequency of new user joins. These systems typically look for accounts that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
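In code, those two triggers (recent account creation and a missing profile picture) reduce to a simple heuristic. The seven-day threshold below is an assumption for illustration, not a value taken from any particular bot.

```python
from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(days=7)  # assumed threshold; tune to taste

def looks_like_raid_account(created_at, has_avatar):
    """Flag accounts that are both very new and have no profile picture.
    `created_at` must be a timezone-aware UTC datetime."""
    age = datetime.now(timezone.utc) - created_at
    return age < MIN_ACCOUNT_AGE and not has_avatar
```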
Raid prevention stops a raid from happening, triggered by either raid detection or raid-user detection. These countermeasures stop participants of a raid specifically from harming your server by preventing raiding users from accessing it in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.
Raid anti-spam is an anti-spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. To fill this role, an anti-spam system should be able to prevent Fast Messages and Repeated Text. This is a subset of damage prevention.
Raid cleanup commands are typically mass-message-removal commands used to clean up channels affected by raid spam, often aliased to "Purge" or "Prune". It should be noted that Discord features built-in raid and user-bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts when the time difference between the join and the leave is extremely small (such as between 0 and 5 seconds); a sketch of this inference follows below. However, you shouldn’t rely solely on these systems if you run a large or public server.
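That join-to-leave inference is easy to automate if your logs store timestamps; the five-second cutoff comes straight from the rule of thumb above.

```python
def likely_removed_by_discord(joined_at, left_at):
    """True if the gap between join and leave falls within the 0-5 second
    range that suggests Discord's own anti-bot systems intervened."""
    delta = (left_at - joined_at).total_seconds()
    return 0 <= delta <= 5
```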
Messages aren’t the only way potential evildoers can present unsavory content to your server. They can also manipulate their Discord username or nickname to cause trouble. There are a few different ways a username can be abusive, and different bots offer different filters to prevent this.
*Gaius can apply same blacklist/whitelist to names as messages or only filter based on items in the blacklist tagged %name
**YAGPDB can use configured word-list filters OR a regex filter
Username filtering is less important than other forms of auto-moderation. When choosing which bot(s) to use for your auto-moderation needs, this should typically be considered last, since users with unsavory usernames can simply be nicknamed to hide their actual username; a sketch of this approach follows below.
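Since nicknaming is the usual remedy, a username filter can be as simple as the following sketch; the blacklist pattern and placeholder nickname are illustrative only.

```python
import re

# Illustrative patterns; real bots use configured word lists or regex filters,
# as the footnotes above note for Gaius and YAGPDB.
NAME_BLACKLIST = re.compile(r"(discord\.gg/|badword)", re.IGNORECASE)

def clean_display_name(name, placeholder="Moderated Nickname"):
    # Rename offenders to a neutral placeholder instead of kicking them.
    return placeholder if NAME_BLACKLIST.search(name) else name
```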
One additional component not included in the table is the effect of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join the conversation in your server, but in exchange they help protect your community from trolls, spam bots, people unable to read your server’s language, and other low-intent users. This can make administration and moderation of your server much easier. You’ll also see the percentage of people who visit more than 3 channels increase as they explore the server and follow verification instructions, and the percentage who talk may increase if people need to type a verification command.
However, in exchange you can expect server leaves to increase, total engagement in your other channels to grow at a slower pace, and user retention to decrease. Furthermore, a gate complicates the interpretation of your welcome screen metrics, as the welcome screen will need to be used primarily to help people follow the verification process rather than to visit many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions actually verified successfully. To measure the efficacy of your verification system, you may need a custom solution that tracks the proportion of people who pass or fail verification.