The Curriculum


303: Facilitating Positive Environments

⚠️ Content Warning: This article contains sensitive terms which are offensive and often used by bad actors in communities for the purpose of harassment. The purpose of displaying them here is to provide context for what these terms mean for those unfamiliar with them, which will in turn allow moderators to make appropriate decisions about which content to filter in their communities and foster environments where everyone feels welcome.

Introduction

The core foundation of a server on Discord is the community that populates it. Your community is what you engage with, protect, and grow over time. Engagement is important to focus on, but it’s just as important to make sure you are facilitating positive and welcoming engagement.

Why Positive Environments are Important

Positive engagement can mean a lot of things, but in this article, we will be referring to the way in which moderation can affect the culture of the server you are moderating. As moderators, your policies, your knowledge of your community, and your deductive skills influence the way in which your community engages with each other and with your team.

When you establish and nurture your community, you are growing a collective group of people who all enjoy at least some of the same things. Regardless of your server topic, you are undoubtedly going to have members of a variety of ethnicities, sexual orientations, and identities from across the world. Ensuring that your space on Discord is a space where they belong necessitates making it safe for them to be themselves, wholly and without reservation. Your members are all humans, all community members, all people who deserve respect and deserve to be welcomed.

Establishing Community Boundaries in Moderation

When you are establishing your community, it’s important to have a basic understanding of what kind of environment you would like your server to be. It’s good to break down the general moderation philosophy on what content and discussion you’d like your community to engage in and what content would be inappropriate given the space. Depending on the topic of your server these goals may be different, but some common questions you can ask to establish general boundaries are:

  • What is the main topic of my server? When you’re thinking about the community and its impact on the growth of your server, it’s important to deduce what kind of server you want to build on a basic conceptual level. If, for example, you are creating a politically-driven server, you might have different limits and expectations, content- and conversation-wise, for your community than a server based on Tetris or pets.
  • What topics do I expect users to engage in? Some servers will have the expectation that members will be allowed to discuss more sensitive, controversial, or thought-provoking topics, while others may feel as if these kinds of heavy debates are out of place. Video game servers tend to have a no-politics rule to avoid negative debates and personal attacks that are beyond the scope of the video game(s) in question. Servers centered around memes, IRL topics, or social communities can be much more topical and have looser rules, while servers centered around mental health or marginalized communities can lean towards a stricter, on-topic-only community policy.
  • What would I like to foster in my community? While knowing what to avoid and moderate is very useful, having an idea of what kind of atmosphere you’d like the server to have goes far in setting the mood for the rest of the community at large. If users notice moderators are engaging in good-faith and positive conversations and condemning toxic or hateful discussion, it is more likely that your users will join in and participate in that positive conversation. If they see you and your mod team have taken the initiative to preserve the good atmosphere of the community, they are moved to put in the effort to reciprocate.

| Characteristic | Add Verified Role | Remove Unverified Role |
| --- | --- | --- |
| Bypassing the server verification level | Users will be subject to the server verification level until they verify. | Users will not be subject to the server verification level on joining, but will be after they verify. |
| Role Permissions | The @everyone role should have no permissions; the Verified role should have the permissions you would normally give to @everyone. | The @everyone role should have normal permissions; the Unverified role should have no permissions. |

Channel Permissions

Separate instructional & verification channels:

@everyone role
  • ✔ Read Messages in both channels
  • ❌ Add Reactions in both channels
  • ✔ Read Message History in instructional
  • ❌ Send Messages in instructional
  • ✔ Send Messages in verification
  • ❌ Read Message History in verification

Verified role
  • ❌ Read Messages in both channels
  • ❌ Add Reactions in both channels

Combined channel:

@everyone role
  • ✔ Read Messages
  • ✔ Read Message History*
  • ✔ Send Messages
  • ❌ Add Reactions

Verified role
  • ❌ Read Messages
  • ❌ Add Reactions

*Unless you are using the channel description for verification instructions rather than an automatic greeter message.

If you want to use the remove unverified role method, you will need a bot that can automatically assign a role to a user when they join.

Verification Actions
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.

| Characteristic | Interview | Click Reaction | Command in Channel | Command in DM |
| --- | --- | --- | --- | --- |
| Ease of use | Requires the most effort from both moderators and users | Simple to perform, cannot mistype anything | More complex, users must read and type more carefully | More complex, users must read and type more carefully |
| Interaction with Server Verification Level | Users subject to server verification level (if using the Verified role method) | Users subject to server verification level (if using the Verified role method) | Users subject to server verification level (if using the Verified role method) | Users not subject to server verification level |
| Effectiveness | Extremely effective at deterring trolls from reaching the rest of the server | Users do not need to read instructions/rules as closely to understand what to do | Encourages users to read the verification message carefully | Encourages users to read the verification message carefully, but the DM may not go through |
| Visibility | Moderators are directly involved | Moderators are unlikely to notice user action | User action is visible to moderators | User action is not visible to moderators |
| Simplicity of setup | While the involvement of bots may be minimal, writing interview questions and determining evaluation criteria could be complex | Requires only a single #welcome type channel with instructions to click the reaction | Can require either one channel or two channels depending on preference | Does not require any channels, unless you want a backup verification method for users that have DMs disabled |

In order to use the command in channel method, you will need to instruct your users to remove the Unverified role or to add the Verified role to themselves.

Evaluating Types of Harmful Rhetoric

This section will be more specific and will break down the most common ways in which a user can engage in harmful rhetoric, how to de-escalate discussions that attack marginalized communities, and how to properly address uncommon symbols used to attack communities.

Harmful Terms and Ableist Language

Members of your community may use obscure symbols or terms to send an offensive message while avoiding blatant attention or triggering filters. While some of these will be used under the guise of being “internet culture,” it’s important to be cognizant that these symbols and language can cause a lot of pain to many marginalized people. Understanding and approaching these terms seriously will help mitigate the long-term damage, both to your server culture and to individuals, that this kind of behavior can cause. This includes not only symbols, but also popular terms that are used to intentionally stir discord and push harmful narratives.

These are all terms and symbols used to specifically target and belittle groups of people, and they are harmful to the growth of a welcoming environment in your community. Some are slurs, while others are generally harmful rhetoric:

Ableist Terms

  • Autistic, Autist, Retard, etc.: Very common ableist terms used to insult users’ intelligence. Commonly used as slurs to attack neurodivergent people, and they should be avoided if possible.
  • To be particular, ‘retard’ is a slur specifically used to attack neurodivergent people. ‘Autistic’ can be used neutrally by autistic people to refer to themselves, and it should only be moderated if it is being used as an insult.
  • Handicapped/Mentally Retarded/Defective: Similar to the above, used to refer to users in an accusatory manner, particularly to attack their intelligence or capabilities as people. Can also be used in an indirect manner with just as much harmful subtext; for example, calling a character in a game a “wheelchair character even the mentally defective could play”.

Racist Terms

  • Jap: Dating to World War II, when Japanese people were held in internment camps in the US, this term was a derogatory way US citizens referred to Japanese people, and it is widely considered an ethnic slur against them.
  • Gypsy/Gypped: Both ‘Gypsy’, referring to the people, and ‘gypped’, meaning to be robbed or conned, are terms used as ethnic slurs against the Romani people. While the word is still used in some legal contexts, it has slowly been brought out of use due to its historical use as a slur.
  • Chink/Ching Chong: Chink has historically been used as a slur against people of Chinese descent, and sometimes against people of Asian descent more widely, with ching chong mocking the Chinese language; the two are commonly used together.
  • Triple Parentheses, also known as (((echo))): This is a less common but recently adopted symbol used to denote someone of Jewish origin, typically in order to target or harass them. This symbol is used to single Jewish people out and place a target on their back for their religion or ethnicity, and it should not be tolerated.

LGBTQ+ Specific Slurs

  • Dyke/Lesbo: A term that originated as a slur against more masculine-presenting lesbian women, ‘dyke’ has been reappropriated by its community into a common slang term for lesbian women. While some people would not mind being called a dyke, be aware of its possible negative connotations for people who may be uncomfortable with the term.
  • Thing: Specifically in reference to pronouns, using ‘thing’ instead of a user’s preferred pronouns mocks their preferred way of expressing their gender identity, and it is commonly used to invalidate or minimize trans/enby people.
  • Fag/Faggot/Homo: All terms used to refer to gay people, and all heavy slurs intended to belittle and attack them. These words are also commonly used in real-world attacks on gay people, and they should not be taken lightly.
  • Trap: A term that originated in anime fandoms, referring to characters who dress and present as female in order to ‘trap’ heterosexual people into being attracted to them. This word has been used outside its original context as a slur against transgender people, as if their existence is meant to ‘trap’ or ‘trick’ the people around them. Not everyone finds this term offensive, so your team should evaluate whether it warrants moderation on a case-by-case basis.

While some of these terms may be popular in certain spaces (such as gaming), it’s important to understand the history and weight behind them, and think accordingly about their place in your server long-term.

| Element | Description |
| --- | --- |
| Title | The text that is placed above the description, usually highlighted. Also directs to a URL, if given. |
| Description | The part of the embed where most of the text is contained. |
| Content | The message content outside the embed. |
| URL | The link to the address of the webpage. Mostly used with the thumbnail, icon and author elements in order to link to an image. |
| Color | Color of your embed’s border, usually in hexadecimal or decimal. |
| Timestamp | Time that the embed was posted. Located next to the footer. |
| Footer | Text at the bottom of the embed. |
| Thumbnail | A medium-sized image in the top right corner of the embed. |
| Image | A large-sized image located below the “Description” element. |
| Author | Adds the author block to the embed, always located at the top of the embed. |
| Icon | An icon-sized image in the top left corner of the embed, next to the “Author” element. This is usually used to represent an Author icon. |
| Fields | Allows you to add multiple subtitles with additional content underneath them below the main “Title” & “Description” blocks. |
| Inline | Allows you to put multiple fields in the same row, rather than having one per row. |

Markdown is also supported in an embed. Here is an image to showcase an example of these properties:

Example image to showcase the elements of an embed
An important thing to note is that embeds also have their limitations, which are set by the API. Here are some of the most important ones you need to know:

  • Embed titles are limited to 256 characters
  • Embed descriptions are limited to 2048 characters
  • There can be up to 25 fields
  • The name of a field is limited to 256 characters and its value to 1024 characters
  • The footer text is limited to 2048 characters
  • The author name is limited to 256 characters
  • In addition, the sum of all characters in an embed structure must not exceed 6000 characters
  • A webhook can have 10 embeds per message
  • A webhook can only send 30 messages per minute

If you feel like experimenting even further you should take a look at the full list of limitations provided by Discord here.

It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool which provides visualization for bot and webhook embeds.
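As a concrete illustration, here is a minimal sketch, not tied to any particular bot, of an embed written as a JSON-style dictionary along with a validator for the character limits listed above. The limit values are the ones quoted in this article; the embed contents are invented for the example.

```python
# A sketch of an embed payload in JSON format, plus a validator for the
# API limits quoted above. The limit numbers come from this article.
import json

EMBED_LIMITS = {
    "title": 256,
    "description": 2048,
    "footer_text": 2048,
    "author_name": 256,
    "field_name": 256,
    "field_value": 1024,
    "max_fields": 25,
    "total": 6000,
}

def validate_embed(embed: dict) -> list:
    """Return a list of limit violations (an empty list means the embed is OK)."""
    problems = []
    if len(embed.get("title", "")) > EMBED_LIMITS["title"]:
        problems.append("title too long")
    if len(embed.get("description", "")) > EMBED_LIMITS["description"]:
        problems.append("description too long")
    if len(embed.get("footer", {}).get("text", "")) > EMBED_LIMITS["footer_text"]:
        problems.append("footer text too long")
    if len(embed.get("author", {}).get("name", "")) > EMBED_LIMITS["author_name"]:
        problems.append("author name too long")
    fields = embed.get("fields", [])
    if len(fields) > EMBED_LIMITS["max_fields"]:
        problems.append("too many fields")
    for f in fields:
        if len(f.get("name", "")) > EMBED_LIMITS["field_name"]:
            problems.append("field name too long")
        if len(f.get("value", "")) > EMBED_LIMITS["field_value"]:
            problems.append("field value too long")
    # The sum of all characters in the embed must not exceed 6000.
    total = (
        len(embed.get("title", ""))
        + len(embed.get("description", ""))
        + len(embed.get("footer", {}).get("text", ""))
        + len(embed.get("author", {}).get("name", ""))
        + sum(len(f.get("name", "")) + len(f.get("value", "")) for f in fields)
    )
    if total > EMBED_LIMITS["total"]:
        problems.append("embed exceeds 6000 total characters")
    return problems

embed = {
    "title": "Server Rules",
    "description": "Be respectful to all members.",
    "color": 0x5865F2,  # decimal/hex color for the border
    "footer": {"text": "Last updated by the mod team"},
    "fields": [
        {"name": "Rule 1", "value": "No slurs or hate speech.", "inline": True},
    ],
}
print(json.dumps(embed, indent=2))
print(validate_embed(embed))  # []
```

Running a check like this before posting saves you from the API rejecting an over-limit embed at send time.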

Creating an LGBTQ+ Friendly Environment

Online platforms tend to have a large amount of hateful content and rhetoric used against marginalized groups of people. When crafting a community, there has to be a common goal of acceptance and welcoming that you provide for all of your members. In online communities, it is not uncommon for users to voice their disdain at other users for their choices in pronouns, gender presentation, or anything that relates them to the LGBTQ+ community.

What is an Ally?

An ally is someone who is not a part of the LGBTQ+ umbrella who supports and ‘allies’ with the community to create an open and welcoming atmosphere in your server. It is important to understand your role in being an ally to your community and to your users.

What are Pronouns?

Pronouns are what people use to refer to a person without directly stating their name. Common pronouns are They/Them, She/Her, and He/Him. There are many others that are not covered here, but pronouns can be very important to someone’s identity and how they’d prefer to be addressed. Users may, at one point or another, make “jokes” such as intentionally calling other users by incorrect pronouns to invalidate their identity, dehumanize them, and humiliate them out of their ability to interact comfortably within the server.

Additionally, it’s worth noting that comments such as ‘there are only two genders’ are used to directly disrespect and undermine the trans community without appearing directly confrontational. This is done to skirt the rules by appearing much less antagonizing than these words truly are.

What are Important LGBTQ+ Terms to Know?

There are a few LGBTQ+ specific terms that are good to be aware of when interacting with LGBTQ+ members of your community. It’s also important to keep up with your communities; it can be helpful to do some research as terms and issues pop up. Quickly Googling a new term that you come across from a member can make them feel much more welcome in the community as a whole and help address sore spots as well.

  • Enby/NB/Genderqueer: Non-binary, a term to describe a person whose gender identities don’t fit into the gender binary (female and male). (NB is an acronym that is also used for “non-black” in some contexts)
  • AMAB/AFAB: Terms describing the sex someone was assigned at birth, usually in contrast to the gender they present as now. AMAB is ‘Assigned Male at Birth’, meaning that someone’s birth certificate says “male” on it, and AFAB is ‘Assigned Female at Birth’, meaning that someone’s birth certificate says “female” on it.

These are also some terms to be aware of that are related to LGBTQ+ issues that you may see being brought up:

  • Chaser: A term to describe a cisgender person who fetishizes or objectifies transgender people (most often transgender women), and seeks out relationships with them.
  • TERF or Trans Exclusionary Radical Feminist: A term for gender-critical individuals who consider themselves feminists who do not acknowledge transgender women as women and promote the exclusion of trans women from women's spaces and organizations.

Hypothesis: Removing invite links from less relevant traffic sources will decrease server growth.
Charts/tables affected:
  • How many new members are joining?
  • Total membership over time
  • Most popular invites/referrers
Expected results:
  • New members joining decreases
  • Total membership over time grows more slowly or decreases
  • Total joins from the invite link/referrer decreases

Hypothesis: Adding or promoting an invite link on a relevant traffic source will increase server growth.
Charts/tables affected:
  • How many new members are joining?
  • Total membership over time
  • Most popular invites/referrers
Expected results:
  • New members joining increases
  • Total membership over time increases more quickly or decreases more slowly
  • Total joins from the invite link/referrer increases

Hypothesis: Improving the overall quality of referrers will attract people that are more likely to stay on your server and engage with your community.
Charts/tables affected:
  • Server leaves over time
  • How many new members successfully activate on their first day?
Expected results:
  • Members remain on your server after joining, decreasing server leaves over time from new members
  • Members are more likely to want to engage with your server prior to joining, and will be more likely to talk or visit multiple channels

Webhooks vs. Bots

Function

Webhooks:
  • Can only send messages to a set channel.
  • They can only send messages, not view any.
  • Can send up to 10 embeds per message.

Bots:
  • Much more flexible, as they can perform complex actions similar to what a regular user can do.
  • Bots are able to view and send messages.
  • Only one embed per message is allowed.

Customization

Webhooks:
  • Can create 10 webhooks per server, with the ability to customize each avatar and name.
  • Able to hyperlink any text outside of an embed.

Bots:
  • Public bots often have a preset avatar and name which cannot be modified by end users.
  • Cannot hyperlink any text in a normal message; must use an embed.

Load and security

Webhooks:
  • Just an endpoint to send data to; no actual hosting is required.
  • No authentication that data sent to the webhook is from a trusted source.
  • If the webhook URL is leaked, only non-permanent problems may occur (e.g. spamming).
  • Easy to change the webhook URL if needed.

Bots:
  • Bots have to be hosted in a secure environment that will need to be kept online all the time, which costs more resources.
  • Bots are authenticated via a token; a compromised token can cause severe damage due to a bot’s capabilities if it has permissions granted to it by the server owner.
  • However, you can reset the bot token if needed.

Even though this comparison is important for better understanding of both bots and webhooks, it does not mean you should limit yourself to only picking one or the other. Sometimes, bots and webhooks work their best when working together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name for that message. Both tools are essential for a server to function properly and make for a powerful combination.
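Because a webhook is just an HTTP endpoint, posting through one is a single JSON POST. The sketch below builds a minimal payload; the URL is a placeholder, and the `username` field illustrates the per-message name customization described above.

```python
# A sketch of sending a message through a webhook. The URL below is a
# placeholder; the real one comes from your channel's integration settings.
import json
from urllib import request

WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder

def build_webhook_payload(content: str, username: str = None) -> dict:
    """Build the JSON body for a webhook message."""
    payload = {"content": content}
    if username is not None:
        payload["username"] = username  # per-message display-name override
    return payload

def send(payload: dict) -> None:
    """POST the payload to the webhook (requires a real URL to succeed)."""
    req = request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)

payload = build_webhook_payload("A user was warned in #general", username="Mod Log")
print(payload["username"])  # Mod Log
```

This is the pattern bots use when they log through webhooks: the bot builds a payload with a custom name and avatar for that message and fires it at the endpoint.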

| Spam Type | Mee6 | Dyno | Giselle | Gaius | YAGPDB | Carl | Gearbot |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fast Messages | No | Yes | Yes**** | Yes | Yes | Yes | No |
| Repeated Text | Yes | Yes | Yes**** | Yes | Yes | No | No |
| Newline Text | No | No | Yes**** | Yes | No | No | No |
| Mentions | Yes | Yes | Yes | Yes | Yes | Yes | Yes* |
| Links | Yes* | Yes* | Yes* | Yes | Yes*** | Yes | Yes |
| Invites | Yes* | Yes* | Yes* | Yes | Yes | Yes | Yes |
| Images | No | Yes | Yes | Yes** | No | Yes | No |
| Emoji | Yes | Yes | No | Yes | No | No | No |

*Unconfigurable filters, these will catch all instances of the trigger, regardless of whether they’re spammed or a single instance

**Gaius also offers an additional NSFW filter as well as standard image spam filtering

***YAGPDB offers link verification via google, anything flagged as unsafe can be removed

****Giselle combines Fast Messages and Repeated Text into one filter

Anti-spam is integral to running a large private server or a public server. Spam, by definition, consists of irrelevant or unsolicited messages. This covers a wide range of behavior on Discord, and there are multiple types of spam a user can engage in. The common forms are listed in the table above. The most common forms of spam are also very typical of raids, those being Fast Messages and Repeated Text. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending lots of messages with the same contents with the intent of disrupting your server.

There are subsets of this spam that many anti-spam filters will be able to catch. If any of the following (Mentions, Links, Invites, Emoji, or Newline Text) are spammed repeatedly in one message or across several messages, they will trigger most Repeated Text and Fast Messages filters appropriately. Subset filters are still a good thing for your anti-spam filter to contain, as you may wish to punish more or less harshly depending on the type of spam. Namely, Emoji and Links may warrant separate punishments: spamming 10 links in a single message is inherently worse than having 10 emoji in a message.

Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
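The X in Y behavior described above can be sketched as a sliding-window counter. This is an illustrative implementation, not any particular bot's filter; the 10-links-in-5-seconds threshold is taken from the example above.

```python
# A sketch of "X in Y" anti-spam: if a user accumulates X events
# (messages, links, mentions...) within Y seconds, the filter fires.
import time
from collections import defaultdict, deque

class SlidingWindowFilter:
    """Trigger when a user accumulates X events within Y seconds."""

    def __init__(self, max_events: int, window_seconds: float):
        self.max_events = max_events
        self.window = window_seconds
        self.events = defaultdict(deque)  # user_id -> event timestamps

    def register(self, user_id: int, count: int = 1, now: float = None) -> bool:
        """Record `count` events (e.g. links in one message) for a user.
        Returns True when the user reaches X events within Y seconds."""
        if now is None:
            now = time.monotonic()
        q = self.events[user_id]
        q.extend([now] * count)
        while q and now - q[0] > self.window:  # drop expired timestamps
            q.popleft()
        return len(q) >= self.max_events

# 10 links in 5 seconds triggers, whether sent as one message or ten.
links = SlidingWindowFilter(max_events=10, window_seconds=5.0)
print(links.register(user_id=1, count=10, now=0.0))  # True
print(links.register(user_id=2, count=1, now=0.0))   # False
```

Because the window counts events rather than messages, the same class covers both the one-message and many-messages cases, which is why some filters can act as Fast Messages and Repeated Text filters simultaneously.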

Sometimes, spam may happen too quickly for a bot to catch up. There are rate limits in place to stop bots from harming servers that can prevent deletion of individual messages if those messages are being sent too quickly. This can often happen in raids. As such, Fast Messages filters should prevent offenders from sending messages; this can be done via a mute, kick or ban. If you want to protect your server from raids, please read on to the Anti-Raid section of this article.

Text Filters
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.

| Filter | Mee6 | Dyno | Giselle | Gaius | YAGPDB | Carl | Gearbot |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Banned words | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Whitelist | No | No | Yes | Yes | Yes | No | Yes |
| Templates | No | Yes | No | Yes | No | No | No |
| Immunity | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Banned Links | Yes* | Yes* | No | Yes | Yes* | Yes*** | Yes |
| Whitelist | Yes | No | No | Yes | Yes** | Yes*** | Yes |
| Templates | No | No | No | Yes | Yes** | Yes*** | No |
| Invites | No | Yes | Yes | Yes | Yes | Yes | Yes |
| Extras | Yes | Zalgo | Selfbot | Regex | Regex | Files | No |

*Defaults to banning ALL links

**YAGPDB offers link verification via google, anything flagged as unsafe can be removed

***Setting a catch-all filter with Carl will prevent link-specific spam detection

A text filter is integral to a well-moderated server. It’s strongly recommended that you use a bot that can filter text based on a blacklist. A Banned Words filter can catch links and invites, provided http:// and https:// are added to the word blacklist (to catch all links), or specific full site URLs are added to block individual websites. In addition, discord.gg can be added to a blacklist to block ALL Discord invites.

A Banned Words filter is integral to running a public server, especially if it’s a Partnered, Community or Verified server, as this level of auto moderation is highly recommended for the server to adhere to the additional guidelines attached to it. Before configuring a filter, it’s a good idea to work out what is and isn’t OK to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers, regardless of context. Banned Words filters often won’t account for context, since they work from an explicit blacklist. For this reason, it’s also important that a robust filter contains whitelisting options. For example, if you add the slur ‘nig’ to your filter and someone mentions the country ‘Nigeria’, they could get in trouble for using an otherwise acceptable word.
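The Nigeria example can be sketched as a blacklist substring check with a whitelist pass that runs first. The word lists here are placeholders for illustration, not a recommended configuration.

```python
# A sketch of a blacklist + whitelist word filter. The whitelist rescues
# innocent words that merely contain a blacklisted substring.
import re

BLACKLIST = {"nig"}        # substrings to block (the example from above)
WHITELIST = {"nigeria"}    # innocent full words rescued from substring matches

def message_violates(message: str) -> bool:
    for word in re.findall(r"[a-z']+", message.lower()):
        if word in WHITELIST:
            continue  # whitelisted words skip the substring check entirely
        if any(banned in word for banned in BLACKLIST):
            return True
    return False

print(message_violates("I visited Nigeria last year"))  # False
```

Checking the whitelist before the blacklist is the detail that matters: a filter that checks substrings first would flag the whitelisted word before the exemption could apply.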

Filter immunity may also be important to your server, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real-world issues may require discussions about slurs or other demeaning language; in this case, channel-based immunity is integral to allowing those conversations.

Link filtering is important to servers where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels for sharing such things. This can allow a server to remove links with an appropriate reprimand without treating a transgression with the same severity as they would a user sending a racial slur.

Whitelisting/blacklisting and templates for links are also a good idea to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. As such, being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without intricate setup. However, it is recommended that you also configure a custom filter to ensure that specific slurs, words, etc. that break the rules of your server aren’t being said.

Invite filtering is equally important in large or public servers, where users will attempt to raid, scam or otherwise assault your server with links intended to manipulate your user base into joining other servers, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots may also allow per-server whitelisting/blacklisting, allowing you to control which servers it is OK to share invites to, and which it isn’t. A good example of invite filtering usage would be something like a partners channel, where invites to other, closely linked servers are shared. These servers should be added to an invite whitelist to prevent their deletion.
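A minimal sketch of what invite filtering with a per-server whitelist might look like: extract invite codes from a message and allow only the codes on the whitelist. The regex and the whitelist contents are illustrative assumptions, not any specific bot's implementation.

```python
# A sketch of invite filtering with a per-server whitelist, covering the
# common invite URL shapes (discord.gg and discord.com/invite links).
import re

INVITE_RE = re.compile(
    r"(?:https?://)?(?:www\.)?(?:discord\.gg|discord(?:app)?\.com/invite)/([A-Za-z0-9-]+)",
    re.IGNORECASE,
)
PARTNER_WHITELIST = {"partnercode"}  # invite codes your server allows (placeholder)

def disallowed_invites(message: str) -> list:
    """Return the invite codes in a message that are not whitelisted."""
    codes = INVITE_RE.findall(message)
    return [c for c in codes if c.lower() not in PARTNER_WHITELIST]

print(disallowed_invites("join discord.gg/partnercode"))       # []
print(disallowed_invites("raid here: discord.gg/evilserver"))  # ['evilserver']
```

A bot using logic like this would delete the message (and possibly warn the user) whenever `disallowed_invites` returns a non-empty list, while leaving partner invites untouched.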

Anti-Raid
Raids, as defined earlier in this article, are mass-joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to you in order to protect your community from this behavior. One method involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.

| Feature | Mee6 | Dyno | Giselle | Gaius | YAGPDB | Carl | Gearbot |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Raid detection | No | No | Yes | No* | No | No | No |
| Raid prevention | No | No | Yes | Yes | No | No | No |
| Raid-user detection | No | No | Yes | Yes | No | No | No |
| Damage prevention | No | Yes | No | Yes* | No | Yes | No |
| Templates | No | No | No | Yes | Yes** | Yes*** | No |
| Raid anti-spam | No | Yes | Yes | Yes | Yes | No | No |
| Raid Cleanup | No | Yes | Yes | Yes | Yes | Yes | Yes |

*Unconfigurable, triggers raid prevention based on user joins & damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.

Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavoury messages.

Raid-user detection is a system designed to detect users who are likely to be participating in a raid, independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
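Those heuristics can be sketched as a simple scoring function. The seven-day threshold and the two signals used here are illustrative assumptions, not any specific bot's logic.

```python
# A sketch of raid-user heuristics: score accounts that are very new or
# still have the default profile picture. Thresholds are illustrative.
from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(days=7)  # invented threshold for the example

def raid_suspicion(created_at: datetime, has_avatar: bool,
                   now: datetime = None) -> int:
    """Score how much a joining account looks like a raid participant.
    Meant to be chained with raid detection before taking action."""
    now = now or datetime.now(timezone.utc)
    score = 0
    if now - created_at < MIN_ACCOUNT_AGE:
        score += 1  # account created very recently
    if not has_avatar:
        score += 1  # still on the default profile picture
    return score

now = datetime(2024, 1, 10, tzinfo=timezone.utc)
fresh_no_avatar = datetime(2024, 1, 9, tzinfo=timezone.utc)
print(raid_suspicion(fresh_no_avatar, has_avatar=False, now=now))  # 2
```

A real system would feed this score into raid prevention, only kicking or muting accounts above some threshold, since each signal alone also matches plenty of legitimate new users.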

Raid prevention stops a raid from happening, either by Raid detection or Raid-user detection. These countermeasures stop participants of a raid specifically from harming your server by preventing raiding users from accessing your server in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.

Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.

Raid anti-spam is an anti spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.

Raid cleanup commands are typically mass-message removal commands used to clean up channels affected by spam as part of a raid, often aliased to ‘Purge’ or ‘Prune’.

It should be noted that Discord features built-in raid and user bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts if the time difference between the join and the leave times is extremely small (such as between 0-5 seconds). However, you shouldn’t rely solely on these systems if you run a large or public server.

User Filters
Messages aren’t the only way potential evildoers can present unsavoury content to your server. They can also manipulate their Discord username or Nickname to cause trouble. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.

| Filter | Mee6 | Dyno | Giselle | Gaius | YAGPDB | Carl | Gearbot |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bad words | No | No | No | Yes* | Yes | No | No |
| Spam | No | No | No | Yes* | Yes** | No | No |
| Hoisting | No | No | No | Yes* | Yes** | No | No |

*Gaius can apply the same blacklist/whitelist to names as to messages, or only filter based on items in the blacklist tagged %name

**YAGPDB can use configured word-list filters OR a regex filter

Username filtering is less important than other forms of auto moderation. When choosing which bot(s) to use for your auto moderation needs, this should typically be considered last, since users with unsavory usernames can simply be nicknamed in order to hide their actual username.
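For illustration, a hoisting check is essentially a test on the name's first character: the member list sorts lexicographically, so ASCII punctuation like "!" floats a name to the top. The character set and fallback nickname below are assumptions for the sketch, not a specific bot's behavior.

```python
# A sketch of a username "hoisting" check and a dehoist helper. A bot
# would apply the dehoisted string as a server nickname.
HOIST_CHARS = "!\"#$%&'()*+,-./"  # ASCII punctuation that sorts before letters

def is_hoisting(name: str) -> bool:
    """True if the name starts with a character that hoists it in the list."""
    return bool(name) and name[0] in HOIST_CHARS

def dehoist(name: str, fallback: str = "dehoisted") -> str:
    """Strip leading hoist characters; fall back if nothing is left."""
    stripped = name.lstrip(HOIST_CHARS)
    return stripped or fallback

print(is_hoisting("!!!TopOfTheList"))  # True
print(dehoist("!!!TopOfTheList"))      # TopOfTheList
```

The fallback matters for names made entirely of punctuation, where stripping would otherwise leave an empty nickname.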

Moderating Hateful Content

When it comes to the content you allow or moderate in your server, it’s important to, again, reflect on what type of community you are. It’s also important that you act quickly and precisely on this type of harmful behavior. Some users will slowly push boundaries on what type of language they can ‘get away with’ before being moderated.

When discussing moderation, a popular theory that often comes up is the broken windows theory. It holds that visible signs of antisocial behavior, civil unrest, disorder, and crime in an area encourage further antisocial behavior and crime. Similarly, if you allow an environment in which toxic and hateful behavior is common, the cycle will perpetuate into further toxicity and hatefulness.

Hypothesis: Enabling or adjusting the Welcome Screen will guide users to the right introduction channels and encourage engagement
Chart/Table Affected:
  • Welcome Screen: all metrics
  • Growth & Activation: How many new members successfully activate on their first day? (% visited more than 3 channels)
Expected Result:
  • Users click on each channel in equal proportion and send messages in equal proportion afterwards
  • % visited more than 3 channels will increase
  • First day activation increases

Hypothesis: Streamlining the channel and role structure will make the server less overwhelming to new users and encourage participation, and/or greeting people upon joining the server in a general chat channel will encourage them to respond and participate in the community
Chart/Table Affected:
  • Growth & Activation: How many new members successfully activate on their first day?
  • Engagement: How many members visited and communicated? Message activity; which text/voice channels do people use the most?
Expected Result:
  • % talked (voice or text) will increase
  • % communicators will increase
  • Message activity will increase
  • Channels that are made opt-in, require privileged access, or are moved to the bottom of the channel list will have less engagement than other channels
  • The channel with greet messages will have an increased number of readers and, if sending messages is enabled, a greater number of messages and chatters

Hypothesis: Implementing a news feed announcement channel with role notifications will encourage people to check the announcement channel regularly
Chart/Table Affected:
  • Growth & Activation: How many new members retain the next week?
  • Engagement: Which text/voice channels do people use the most?
Expected Result:
  • Members retained will increase
  • Readers on the announcement channel will increase

Hypothesis: Implementing community engagement campaigns will improve activity
Expected Result: The measurement and expected results of each community engagement campaign will vary based on the exact nature of the campaign. However, you can expect that they will improve some combination of first day activation, user retention, and/or percent communicators within your server.

One additional component not included in the table is the effects of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join in the conversation of your server, but in exchange help protect your community from trolls, spam bots, those unable to read your server’s language, or other low intent users. This can make administration and moderation of your server much easier. You’ll also see that the percent of people that visit more than 3 channels increases as they explore the server and follow verification instructions, and that percent talked may increase if people need to type a verification command.

However, in exchange you can expect to see server leaves increase. In addition, total engagement on your other channels may grow at a slower pace. User retention will decrease as well. Furthermore, this will complicate the interpretation of your welcome screen metrics, as the welcome screen will need to be used to help people primarily follow the verification process as opposed to visiting many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions successfully verified. In order to measure the efficacy of your verification system, you may need to use a custom solution to measure the proportion of people that pass or fail verification.
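
As a sketch of such a custom measurement: assuming you log, on your own, which members attempted verification and which received the verified role (neither set is something Discord provides directly), the pass rate reduces to a simple set computation.

```python
def verification_pass_rate(attempted: set, verified: set) -> float:
    """Fraction of members who attempted verification and succeeded.

    `attempted` and `verified` are sets of member IDs from your own
    logging -- hypothetical inputs, not Discord-provided data.
    """
    if not attempted:
        return 0.0
    return len(verified & attempted) / len(attempted)
```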

What is Bad-Faith Content vs. Good-Faith Content?

‘Bad-faith’ content is a term that describes behavior intended to cause mischief, drama, or toxicity in a community. The users behind it are commonly referred to as bad actors, and are the type of people that should be dealt with swiftly and addressed directly.

‘Good-faith’ content is a term that describes user behavior with good intentions. When users are a positive foundation in your community, the members that join and interact with the established community will adapt and speak in a way that continues the positive environment that has been fostered. It’s important to note that while ‘good-faith’ users are generally positive people, it is possible for them to say things that are wrong or sometimes even harmful. The importance of this distinction is that these users can learn from their mistakes and adapt to the behavior you expect of them.

When users deliberately push these boundaries, they are not acting in good faith. As moderators, you should be directly involved enough to determine what is bad-faith content and remove it. On the other hand, education is important in the community sphere for long-term growth. While you can focus on removing bad behavior from bad-faith users, reforming good-faith community members who don’t realize their rhetoric is harmful should also be a primary goal when crafting your community. When interacting in your community, if you see harmful rhetoric or a harmful stereotype, step back and think meaningfully about the implications of leaving that content up. Does it:

  • Enforce a negative stereotype?
  • Cause discomfort to users and the community at large?
  • Create a negative space for users to feel included in the community?

Ideas to Help Prioritize Inclusivity

  • Allowing users to have pronouns on their profile. Depending on your server, you may choose to have pronoun roles that members can directly pick from to display on their profile. This is a way to allow users to express their pronouns in a way that doesn’t isolate them. When a server offers a larger, more welcoming system for pronouns, it becomes much harder to tell whether a member picked pronouns because they are LGBTQ+, because they’re an ally, or simply as part of setting up their roles. A pronoun system built into the server can also foster community-wide acceptance of pronouns and respect for other users’ identities, and can deter transphobic rhetoric.
  • Discourage the use of harmful terms. It’s no secret that terms such as ‘retard’ and ‘trap’ are commonly used in certain social circles. As moderators, you can discourage the use of these words in your community’s lexicon.
  • Create strong bot filters. Automated moderation of slurs and other forms of hate speech is probably your strongest tool for minimizing the damage bad actors can cause in your server. Add the common variations people use to slip past the filter as well (for example, a slur censored by adding or removing a letter).
  • A good document to follow for bot filter and auto moderation as a whole is also in the Discord Mod Academy, which can be found here!
  • Educating your community. Building a community without toxicity takes a lot of time and energy. The core of all moderation efforts should be in educating your communities, rewarding good behavior, and making others aware of the content they are perpetuating.
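
The filter-evasion point above can be sketched as a normalization step applied before matching, so common substitutions and stretched spellings still hit the filter. The substitution map and word list here are illustrative placeholders, not a complete solution.

```python
import re

# Map common character substitutions back to plain letters.
LEET_MAP = str.maketrans({"@": "a", "4": "a", "3": "e", "1": "i",
                          "!": "i", "0": "o", "$": "s", "5": "s"})
FILTERED_WORDS = {"badword"}  # placeholder; use your real list

def normalize(text: str) -> str:
    """Lowercase, undo substitutions, and collapse repeated letters."""
    text = text.lower().translate(LEET_MAP)
    return re.sub(r"(.)\1+", r"\1", text)  # "baaadword" -> "badword"

def is_filtered(message: str) -> bool:
    cleaned = normalize(message)
    return any(word in cleaned for word in FILTERED_WORDS)
```

Note that collapsing repeated letters is applied only to the copy used for matching, so it doesn’t affect the message itself; dedicated bots use more sophisticated normalization, but the principle is the same.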

A core part of all de-escalation is your approach. Users who are heated during a frustrating or toxic discussion are easy to set off or to accidentally push into further toxicity. The key is to type calmly, and to make sure that however you approach someone to de-escalate, you do it in a way that is understood to be for the benefit of everyone involved.

Closing

Creating a healthy community that leaves a lasting, positive impact on its members is difficult. Moderators have to be aware, educated, and always on the lookout for things they can improve on. By taking the initiative on this front, your community can grow into a positive, welcoming place for all people, regardless of their race, gender, gender identity, or sexual orientation.

Other Resources

Below are resources for further research and discussion on different types of slurs, symbols, and hate speech not referenced explicitly in this document.
