Discord is a place for all kinds of connections to be made and relationships to form, and as a moderator, one of your primary responsibilities is managing the relationships of others to ensure that you are promoting a healthy, productive, and inclusive community. But what happens when the interpersonal relationships that you as a moderator have built start to cause problems in the community you moderate? Understanding how to manage your own interpersonal relationships within the communities you moderate is key to preventing major administrative problems and is a crucial skill for a prospective moderator. This article explains the dangers of interpersonal relationships gone awry and offers precautions to take when forming close relationships with members of communities you moderate.
Any relationship between two members of a community can be described as an interpersonal relationship. These relationships exist on a wide spectrum. As you participate in a community, you are most likely going to develop connections to varying degrees with other members of the community. As a moderator, this may even be expected as part of your duties to promote community engagement and healthy conversations. That’s perfectly normal, as it’s very natural for people who spend a lot of time communicating to develop closer ties to one another.
Every kind of relationship, from mere acquaintances to romantic partners, can occur in a Discord community, and every relationship you form as a moderator will carry its own unique challenges and responsibilities in order to ensure you are performing your duties to the best of your ability. Any kind of interpersonal relationship can create difficulty in moderation, but as the nature of the relationship changes, so too does the unconscious bias you may experience.
A friendship between a moderator and a member of the community is the least problematic type of interpersonal relationship, but as these friendships form, it is still important to take notice and be aware of them. As a moderator, it is your duty to be available to everyone in the community, even people you may never see as friends, so you must resist the temptation to devote more time and attention to the people you more easily connect with. If your biases toward your friends begin to show in your moderation efforts, more serious and harder-to-diagnose problems can arise. Perceptions of "elitism" or "favoritism" can start to take hold, and disgruntled members may point to your friendships to excuse or justify their own behavior, so take care to remain impartial.
A friendship between a moderator and a member of the community that persists for a long period can evolve into a closer and more open relationship. These relationships are built on trust or shared experience, and can be more difficult to impartially manage than regular friendships or acquaintances. This kind of relationship could come from the fact that this person is someone you know from another server, in real life, or possibly even a family member. No matter what the scenario, the closeness of this kind of relationship makes it very difficult, sometimes impossible, to remove your own partiality from the equation. Special care must be taken to ensure you engage and listen to other moderators on your team when someone you are closely involved with is in question. When in doubt, it may be best to remove yourself from the situation entirely, which we will discuss in more detail later in the article.
A romantic relationship between a moderator and a member of the community can (and does!) happen. As is natural, if you meet someone who shares common interests and has an attractive personality, over time your relationship may progress into something more profound. Romantic relationships are certainly the most difficult to manage as a moderator. The saying holds true, especially in new romantic relationships, that you will see your significant other through “rose-tinted glasses” which tend to blind you from their potential flaws or wrongdoings.
Additionally, other members can very quickly see a budding relationship as an opportunity for a fellow member to grab power through the moderator they are romantically involved with. As a best practice, you should remove yourself from any moderation decisions involving a user that you are in a romantic relationship with. Failing to do so can and has directly caused the death of some communities, especially when the romantic partners are both on the same moderator team.
A parasocial relationship, a one-sided interpersonal relationship, is rare among moderators because of the connection a moderation team usually has to the content creator or personality that they moderate for. More commonly, a user develops such a relationship after watching a friendly moderator carry out their daily duties and interact with the server. However, this type of relationship requires an extra level of care and awareness, as it can quickly become toxic if not managed appropriately. Always be aware of these relationships, and consider their existence when making certain moderation decisions. The DMA has an article exclusively dedicated to parasocial relationships for further reading.
One thing to keep in mind when evaluating your relationships in your communities, regardless of their nature, is that your relationships and connections, if played out in the server, are most likely visible to other members of the community. When you interact with your friends, close friends, or even your partner in a shared space like your server, members of the community may pick up on the fact that these relationships exist. As with any kind of community, feelings of exclusion or the perception of "in-groups" can arise, especially when it comes to relationships between a "regular" server member and a highly public and visible one like a moderator. Part of your responsibility as a moderator is to take this dynamic into account, along with the effects it can have on your members and how they view you and your friendships. Make sure that your friendships and relationships are not creating an exclusionary atmosphere where other community members feel it is unwanted or difficult for them to contribute.
On the subject of "visibility", a moderator, whether they are consistently conscious of it or not, is someone in the server who has power over other users in that space. It is not always easy to balance being part of a community and cultivating relationships and friendships with being conscious of your role as a moderator and the influence that imbalance of power can have. This difference in responsibility and position can make relationships and connections with other users in the server more complicated. You may not be directly aware of it when you're chatting with fellow server members, but there will be users in your community who are keenly aware of your status as a moderator. This scrutiny can affect how they approach becoming friends with you, as well as how they view your relationships with other server members. Always keep this dynamic in mind and be aware of how your position may affect not just how users interact with you, but also how they interpret your relationships and conversations with other members.
Just as it is natural for these relationships to form, it is also human nature to unconsciously develop and act on a bias toward the people closest to you. As a moderator, that natural bias is something you must actively resist, and take conscious steps to avoid. What happens when the friend of a moderator has a bad day and doesn’t act in the spirit of the rules of the community? In an ideal scenario, the moderator’s response would be the same reasonable response that would be expected if the offending member were anyone else. Your response to these situations will have a profound impact on your community’s attitude toward you as a moderator, as showing favoritism will quickly evaporate the community’s trust in your ability to be impartial. Moderators are human, and for inexperienced and seasoned moderators alike, this kind of scenario can prove to be one of the most significant tests of their ability to manage conflict.
In preparing for this scenario, the most important tool in a moderator's arsenal is self-awareness. It is the burden of a moderator that their commitment to the community comes above any interpersonal relationships that may form during time spent engaging with it. Being ever-mindful of your responsibility and role in a community can help temper the depth of the relationships that you build.
As a recommended best practice, moderators should be careful about building interpersonal relationships of depth (close or romantic relationships) in the communities they moderate, including with other moderators. The only guaranteed way for a moderator to remain impartial in upholding the rules for all members is to keep relationships within their community at the level of friendship and nothing deeper, but this isn't always reasonable for communities that you are closely involved in. Should you find yourself in a difficult scenario involving a member with whom you have a close interpersonal relationship, here are some best practices for managing the situation:
The first step in successfully managing a scenario that involves someone you have an interpersonal relationship with is to take stock of your own investment. How are you feeling? Are you calm and capable of making a rational judgment? Is your gut reaction to jump to the defense of the member? Or is the opposite true: do you feel the need to be overly harsh in order to compensate for potential bias? Carefully self-evaluate before proceeding with any action. The wrong type of moderator response in a scenario like this can often exacerbate or distract from the actual issue at hand, and potentially weaken your community's trust in your capabilities as a moderator.
If in the course of your self-evaluation you realize that you cannot positively answer any or all of these questions, it may be necessary for you to more seriously evaluate whether or not you need to make difficult decisions regarding your position as a moderator. If your interpersonal relationship is preventing you from fulfilling your duties as a moderator, you may need to consider either abdicating your role as a moderator or ending the relationship until circumstances improve. Neither option is easy or ideal, but making tough decisions for the health of the community is your primary responsibility as a moderator.
Once you’ve determined that you’re capable of proceeding with moderation, evaluate the scenario to identify what the problem is and whether it immediately needs to be addressed. If there is no immediate need to step in, as a best practice it is usually better to defer to another moderator whenever your personal relationships are involved. Contact another member of your moderation team to get a second opinion and some backup if necessary.
If immediate action is required, a concise and direct reference to the rules is usually sufficient to defuse the situation. Use your best judgment, but be aware that the likelihood of "rules lawyering" is higher with someone who trusts you or sees you as a friend, because moderation action can be seen as a violation of that trust or relationship. Clearly and fairly indicating the grounds for speaking up is crucial to prevent further issues from arising.
Additionally, be careful about what is discussed in private with the person involved in this scenario following any action. There is a higher likelihood of them contacting you via DM to talk about your decisions because of the level of trust that exists between you. As a best practice, avoid litigating the rules of the server with any member, especially a member with whom you have an interpersonal relationship. Politely excuse yourself, or if prudent, redirect the conversation by giving the member a place to productively resolve their own issue.
As with any moderation action, once it is taken it is best practice to leave a note for your team about what action was taken and why. Another period of self-evaluation is a good idea after any action is taken. Ask yourself: was the action taken in alignment with the rules of your community? Was it fair both to the offending member and to the other members of your community? Was your decision affected by your bias toward the offending member? If necessary or unclear, ask your teammates for their outside perspective.
*Unless you are using the channel description for verification instructions rather than an automatic greeter message.
If you want to use the remove unverified role method, you will need a bot that can automatically assign a role to a user when they join.
Verification Actions
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.
In order to use the command in channel method, you will need to instruct your users to remove the Unverified role or to add the Verified role to themselves.
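To make these role actions concrete, here is a minimal sketch of the "remove unverified role" approach combined with a command typed in a channel, assuming a bot built with the discord.py library; the role names ("Unverified", "Verified"), the "!" prefix, and the !verify command are placeholders rather than any particular bot's configuration:

```python
# Minimal verification sketch using discord.py (role names and prefix are placeholders).
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True          # required to receive member join events
intents.message_content = True  # required to read the !verify command

bot = commands.Bot(command_prefix="!", intents=intents)

@bot.event
async def on_member_join(member: discord.Member):
    # "Remove unverified role" method: tag every new member as Unverified on join.
    role = discord.utils.get(member.guild.roles, name="Unverified")
    if role is not None:
        await member.add_roles(role, reason="New member pending verification")

@bot.command()
async def verify(ctx: commands.Context):
    # "Command in channel" method: the member types !verify once they have read the rules.
    unverified = discord.utils.get(ctx.guild.roles, name="Unverified")
    verified = discord.utils.get(ctx.guild.roles, name="Verified")
    if unverified is not None and unverified in ctx.author.roles:
        await ctx.author.remove_roles(unverified, reason="Passed verification")
    if verified is not None:
        await ctx.author.add_roles(verified, reason="Passed verification")

bot.run("YOUR_BOT_TOKEN")
```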
Taking moderation action when the offending member is one with whom a moderator has an interpersonal relationship can be one of the most difficult scenarios that a moderator can find themselves in. Set yourself up for success as a moderator by tempering the type of relationships you build within your community and cultivating the ability to self-evaluate. The best tool available to a moderator in these scenarios is self-awareness and the ability to recognize when their own biases prevent them from acting fairly. Remember that moderation is a team sport, and that team is your most valuable resource in impartially upholding the rules and values of your community.
Markdown is also supported in an embed. Here is an image to showcase an example of these properties:
Example image to showcase the elements of an embed
An important thing to note is that embeds also have their limitations, which are set by the API. Here are some of the most important ones you need to know:
If you feel like experimenting even further, you should take a look at the full list of limitations provided by Discord here.
It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool which provides visualization for bot and webhook embeds.
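As an illustration of that format, here is a hypothetical embed payload written as a Python dictionary mirroring the JSON structure the API expects; every title, field, and color value below is invented for the example, and the equivalent JSON could be pasted into an embed visualizer or sent through a webhook:

```python
# A hypothetical embed, structured the way the Discord API expects embed JSON.
example_embed = {
    "title": "Server Rules Update",
    "description": "A short summary of the change. *Markdown works here too.*",
    "color": 0x5865F2,  # colors are sent as integers
    "fields": [
        {"name": "What changed", "value": "Rule 3 now covers voice channels.", "inline": False},
        {"name": "Effective", "value": "Immediately", "inline": True},
    ],
    "footer": {"text": "Posted by the moderation team"},
}
```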
Even though this comparison is important for better understanding of both bots and webhooks, it does not mean you should limit yourself to only picking one or the other. Sometimes, bots and webhooks work their best when working together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name for that message. Both tools are essential for a server to function properly and make for a powerful combination.
*Unconfigurable filters; these will catch all instances of the trigger, regardless of whether they're spammed or a single instance
**Gaius also offers an additional NSFW filter as well as standard image spam filtering
***YAGPDB offers link verification via google, anything flagged as unsafe can be removed
****Giselle combines Fast Messages and Repeated Text into one filter
Anti-Spam is integral to running a large private server or a public server. Spam, by definition, is irrelevant or unsolicited messaging. This covers a wide range of behavior on Discord, and there are multiple types of spam a user can engage in; the common forms are listed in the table above. The most common forms of spam are also very typical of raids, namely Fast Messages and Repeated Text. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending many messages with the same contents with the intent of disrupting your server.
There are subsets of this spam that many anti-spam filters will be able to catch. If any of the following: Mentions, Links, Invites, Emoji, and Newline Text are spammed repeatedly in one message or spammed repeatedly across several messages, they will provoke most Repeated Text and Fast Messages filters appropriately. Subset filters are still a good thing for your anti-spam filter to contain as you may wish to punish more or less harshly depending on the spam. Namely, Emoji and Links may warrant separate punishments. Spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
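As a rough, library-agnostic illustration of that "X in Y" idea, here is a minimal sliding-window check; the thresholds (10 links in 5 seconds) and the function name are arbitrary assumptions, not any specific bot's implementation:

```python
# Minimal "X in Y" sliding-window spam check (thresholds are arbitrary examples).
import re
import time
from collections import defaultdict, deque

LINK_PATTERN = re.compile(r"https?://\S+")
MAX_LINKS = 10        # X: how many links...
WINDOW_SECONDS = 5.0  # Y: ...within how many seconds

# Per-user timestamps of recently seen links.
recent_links = defaultdict(deque)

def is_link_spam(user_id: int, message_content: str) -> bool:
    """Return True if this message pushes the user over X links in Y seconds."""
    now = time.monotonic()
    timestamps = recent_links[user_id]

    # One timestamp per link covers both 10 links in one message
    # and 1 link in each of 10 rapid messages.
    for _ in LINK_PATTERN.findall(message_content):
        timestamps.append(now)

    # Drop anything older than the window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()

    return len(timestamps) > MAX_LINKS
```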
Sometimes, spam may happen too quickly for a bot to catch up. There are rate limits in place to stop bots from harming servers that can prevent deletion of individual messages if those messages are being sent too quickly. This can often happen in raids. As such, Fast Messages filters should prevent offenders from sending messages; this can be done via a mute, kick or ban. If you want to protect your server from raids, please read on to the Anti-Raid section of this article.
Text Filters
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**YAGPDB offers link verification via google, anything flagged as unsafe can be removed
***Setting a catch-all filter with Carl will prevent link-specific spam detection
A text filter is integral to a well-moderated server. It's strongly, strongly recommended that you use a bot that can filter text based on a blacklist. A Banned Words filter can catch links and invites, provided http:// and https:// are added to the word blacklist (to block all links) or specific full site URLs are added to block individual websites. In addition, discord.gg can be added to a blacklist to block ALL Discord invites.
A Banned Words filter is integral to running a public server, especially a Partnered, Community, or Verified server, as this level of auto moderation is highly recommended for the server to adhere to the additional guidelines attached to those statuses. Before configuring a filter, it's a good idea to work out what is and isn't okay to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers, regardless of context. Banned word filters with an explicit blacklist often won't account for context, so it's important that a robust filter also contains whitelisting options. For example, if you add the slur 'nig' to your filter and someone mentions the country 'Nigeria', they could get in trouble for using an otherwise acceptable word.
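To illustrate why whitelisting matters alongside a blacklist, here is a minimal sketch of that check; the word lists and function name are placeholders, and real bots expose this as configuration rather than code:

```python
# Minimal banned-words check with a whitelist for false positives (all entries are placeholders).
BLACKLIST = {"discord.gg", "http://", "https://", "examplebannedword"}
WHITELIST = {"nigeria"}  # full words that stay acceptable even if they contain a blacklisted substring

def message_is_filtered(content: str) -> bool:
    for word in content.lower().split():
        if word in WHITELIST:
            continue  # whitelisted words are never flagged
        if any(banned in word for banned in BLACKLIST):
            return True  # substring match against the blacklist
    return False
```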
Filter immunity may also be important to your server, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real-world issues may require discussions about slurs or other demeaning language; in this exception, channel-based immunity is integral to allowing those conversations.
Link filtering is important to servers where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels for sharing such things. This can allow a server to remove links with an appropriate reprimand without treating a transgression with the same severity as they would a user sending a racial slur.
Whitelisting/blacklisting and templates for links are also a good idea to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. As such, being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without intricate setup. However, it is still recommended that you configure a custom filter to ensure that specific slurs, words, etc. that break the rules of your server aren't being said.
Invite filtering is equally important in large or public servers, where users will attempt to raid, scam, or otherwise assault your server with links intended to manipulate your user base into joining other servers, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots may also allow per-server whitelisting/blacklisting, letting you control which servers are okay to share invites to and which aren't. A good example of invite filtering usage would be something like a partners channel, where invites to other, closely linked servers are shared. These servers should be added to an invite whitelist to prevent their deletion.
Anti-Raid
Raids, as defined earlier in this article, are mass-joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to you to protect your community from this behavior. One method involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.
*Unconfigurable, triggers raid prevention based on user joins & damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavoury messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid, independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
Raid prevention stops a raid from harming your server, acting on either Raid detection or Raid-user detection. These countermeasures stop participants of a raid by preventing raiding users from accessing your server in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
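Putting the last three ideas together, here is a minimal raid detection and prevention sketch assuming a discord.py bot; the join-rate window, account-age threshold, and the choice of a kick as the countermeasure are all arbitrary assumptions for illustration:

```python
# Minimal raid detection/prevention sketch using discord.py (all thresholds are arbitrary).
import datetime
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True  # required to receive member join events

bot = commands.Bot(command_prefix="!", intents=intents)

JOIN_WINDOW = datetime.timedelta(seconds=30)   # Y: window for counting joins
JOIN_THRESHOLD = 10                            # X: joins within the window that suggest a raid
MIN_ACCOUNT_AGE = datetime.timedelta(days=1)   # raid-user heuristic: very new accounts

recent_joins: list[datetime.datetime] = []

@bot.event
async def on_member_join(member: discord.Member):
    now = discord.utils.utcnow()

    # Raid detection: X joins in Y seconds.
    recent_joins.append(now)
    while recent_joins and now - recent_joins[0] > JOIN_WINDOW:
        recent_joins.pop(0)
    join_surge = len(recent_joins) >= JOIN_THRESHOLD

    # Raid-user detection: freshly created account with no custom avatar.
    looks_suspicious = (now - member.created_at) < MIN_ACCOUNT_AGE and member.avatar is None

    # Raid prevention: remove users that trip both checks (kick chosen as the example action).
    if join_surge and looks_suspicious:
        await member.kick(reason="Raid prevention: join surge and suspicious account")

bot.run("YOUR_BOT_TOKEN")
```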
Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.
Raid anti-spam is an anti-spam system robust enough to prevent raiding users' messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message removal commands to clean up channels affected by spam as part of a raid, often aliased to 'Purge' or 'Prune'.

It should be noted that Discord features built-in raid and user bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts if the time difference between the join and the leave is extremely small (such as 0-5 seconds). However, you shouldn't rely solely on these systems if you run a large or public server.
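Returning to the cleanup commands mentioned above, here is a minimal sketch of such a command for a discord.py bot; the command name, default message count, and permission check are assumptions for illustration:

```python
# Minimal raid-cleanup ("purge") command sketch using discord.py.
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True  # required to read commands

bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
@commands.has_permissions(manage_messages=True)
async def purge(ctx: commands.Context, amount: int = 50):
    # Bulk-delete the most recent messages in the affected channel
    # (the +1 also removes the command invocation itself).
    await ctx.channel.purge(limit=amount + 1)

bot.run("YOUR_BOT_TOKEN")
```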
User Filters
Messages aren’t the only way potential evildoers can present unsavoury content to your server. They can also manipulate their Discord username or Nickname to cause trouble. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.
*Gaius can apply same blacklist/whitelist to names as messages or only filter based on items in the blacklist tagged %name
**YAGPDB can use configured word-list filters OR a regex filter
Username filtering is less important than other forms of auto moderation. When choosing which bot(s) to use for your auto moderation needs, this should typically be considered last, since users with unsavory usernames can simply be nicknamed in order to hide their actual username.
One additional component not included in the table is the effects of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join in the conversation of your server, but in exchange help protect your community from trolls, spam bots, those unable to read your server’s language, or other low intent users. This can make administration and moderation of your server much easier. You’ll also see that the percent of people that visit more than 3 channels increases as they explore the server and follow verification instructions, and that percent talked may increase if people need to type a verification command.
However, in exchange you can expect to see server leaves increase. In addition, total engagement on your other channels may grow at a slower pace. User retention will decrease as well. Furthermore, this will complicate the interpretation of your welcome screen metrics, as the welcome screen will need to be used to help people primarily follow the verification process as opposed to visiting many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions successfully verified. In order to measure the efficacy of your verification system, you may need to use a custom solution to measure the proportion of people that pass or fail verification.