January 18, 2024

How We Put Safety at the Center of Everything We Build

Risk is an inherent part of communicating in the digital world. At Discord, one way we mitigate those risks is through our approach to safety. For us, safety considerations aren’t just a “step” or an item on a to-do list — they’re fully integrated into our design process.

As a result, we evaluate safety considerations from day one as we build products and features.

What it means to start with safety

As soon as we start building something new, we assess the risks that might come from it. By starting at the concept phase, we can factor safety into everything we do as we design, build, launch, and manage a new product.

Throughout the process, our teams weigh in with specialized knowledge in their domains, including teen safety, counter-extremism, cybercrime, security, privacy, and spam. Functionally, this is how we pinpoint any gaps that may exist as we build the products and features that make Discord a fun, safe place to hang out with friends.

Four key questions for building safer products

Any time we build something, we prioritize four key questions to determine our approach to building safer products:

  • How will we detect harmful activity or content, and balance the need for safety, security, and privacy? 
  • How will we review the content or activity we detect? This includes understanding how we will weigh it against our policies, guidelines, or frameworks, and how we will determine the severity of the harm.
  • How will we enforce our rules against actions that we find to be harmful?
  • How will we measure the outcomes and efficacy of decisions? We want to ensure our safety efforts work well, that we reevaluate them when we need to, and that we hold ourselves accountable for delivering safe experiences for our users.

There are some things we consider with every product. For instance, we look at our data on the most common violations related to what we’re building, which tells us which harms are the most severe and the most prevalent today.

But our products don’t all have the same behaviors or content, so we can’t just copy-paste our work from one tool to another. Think of the Family Center or voice messages, for example. They have unique purposes and functions, so we needed different answers to each of the questions above as we built them. By starting with these questions on day one, we were able to make the right decisions for each tool at each step of the process and arrive at the strongest safety approach for each.

By following this approach, everyone involved in making our products has a clear-eyed and honest understanding of the risks that could potentially arise from our work. We make intentional decisions to minimize those risks and commit to only releasing something if it meets our high standard of safety.

What starting with safety looks like in practice

Let’s look a little deeper at voice messages, the feature that lets you record audio on your phone and easily send it to your friends. Here’s a brief rundown of how we incorporated safety conversations into every aspect of bringing this fun tool to life.

At the earliest stage, the product team drew up preliminary specifications and consulted with trust and safety subject matter experts on staff. The product manager shared the specs with individuals from the policy and legal teams, with a focus on flagging any concerns or conflicts with Discord’s own policies.

Once initial feedback was incorporated, the product manager created an action plan and brought on the engineers who began building the feature. Even at this stage, safety experts were part of the process. In fact, it was here that members of the safety team called for an additional requirement: a way for us to handle large volumes of reported audio messages. Just as users can report written messages, they can also report audio messages for content that violates our policies. Our safety teams needed to ensure we could review those reports effectively as they flow in.

In this instance, the team had to figure out how the audio review process would even work. Would reviewers have to sit and listen to hours of reported messages each day? That didn’t seem viable, so the core team decided to transcribe reported audio so it could flow through our existing text-review tools. Now reviewers can evaluate the transcript quickly and listen to the audio for additional context.
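As a purely illustrative sketch (Discord hasn’t published its implementation, so every name and service below is hypothetical), a transcription-assisted review queue might look something like this:

```python
from dataclasses import dataclass

@dataclass
class AudioReport:
    report_id: str
    audio_url: str
    reporter_reason: str

def transcribe(audio_url: str) -> str:
    # Stand-in for a real speech-to-text call; hypothetical.
    return f"<transcript of {audio_url}>"

def enqueue_for_review(report: AudioReport) -> dict:
    """Prepare a reported audio message for human review."""
    # Transcribe once so reviewers can scan text quickly; the
    # original audio stays attached for additional context.
    return {
        "report_id": report.report_id,
        "reason": report.reporter_reason,
        "transcript": transcribe(report.audio_url),
        "audio_url": report.audio_url,
    }

queue_item = enqueue_for_review(
    AudioReport("r-123", "https://example.com/clip.ogg", "harassment")
)
print(queue_item["transcript"])  # reviewers read this first
```

The design choice the sketch captures is the one described above: text is the fast first pass, and audio playback is reserved for cases where a reviewer needs more context.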

Once again, the product manager and engineers worked closely with the safety team to build in this additional layer of safety.

Just because our voice messages feature is live doesn’t mean our safety job is complete. We need to understand how effective our methods are, so we’ve developed metrics that track safety and give us insight into the impacts of the decisions we’ve made.

We track, for instance, whether we see unusual rates of flagged content or appeals of our decisions, as well as the breakdown of violation types across our platform. We summarize these metrics in our quarterly Transparency Reports.
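As a hedged illustration (the metric names, numbers, and threshold below are made up, not Discord’s), watching for unusual rates can reduce to simple ratios compared against a baseline:

```python
def rate(numerator: int, denominator: int) -> float:
    # Safe ratio: avoids division by zero when there is no activity.
    return numerator / denominator if denominator else 0.0

def is_anomalous(current: float, baseline: float, tolerance: float = 0.5) -> bool:
    # Flag a metric that drifts more than `tolerance` (here 50%) from baseline.
    return baseline > 0 and abs(current - baseline) / baseline > tolerance

# Illustrative numbers only.
flag_rate = rate(420, 1_000_000)   # flagged messages / total messages
appeal_rate = rate(35, 900)        # appeals / enforcement actions

print(f"flag rate:   {flag_rate:.6f}")
print(f"appeal rate: {appeal_rate:.4f}")
print("investigate flags?", is_anomalous(flag_rate, baseline=0.0003))
```

A sudden jump in either ratio relative to its baseline is the kind of signal that would prompt a reevaluation of the feature’s safety measures.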

What it doesn’t look like

To better understand our safety-centric approach, consider the alternative.

A different platform could start with an innovative idea and the best intentions. It could build the product it thinks would be most exciting for users, and review the risk and safety implications when it’s ready to go live. If the company identifies risks at a late stage, it has to decide what to address before releasing the product to the world.

This happens, and it’s a flawed way to build. If you only notice the risks you’ve created when you’re already at the finish line, you may have to start over to do it right. Building a product this way is risky, and users are the ones most exposed to those risks.

How Discord starts with safety

At Discord, we understand that we have to think about safety and growth holistically.

Discord is built differently from other platforms. We don’t chase virality or impressions. Discord makes money by providing users with a great experience through premium subscriptions. This model lets us focus on our users and gives us more freedom to prioritize safety.

Put simply, we’re only interested in growing a platform that’s safe. We’re here to help people forge genuine friendships, and we know that can only happen when people feel safe connecting with one another.

Tags:
Communications
User Safety
