The Guide to Trust & Safety: Safety by Design

April 12, 2022

When platforms build a product with Safety by Design as a guiding principle, Trust & Safety teams can better protect users from the start. In this article, we share seven features that teams should consider incorporating when designing their platforms.


Safety by design is the principle of building a product with safety at its center. Its goal is to prevent harm before it occurs, rather than to implement remedies after the fact. In the Trust & Safety industry, safety by design should be a guiding principle from the start of a platform’s creation. By putting safety at the forefront of product decisions, Trust & Safety teams reap long-term benefits: users stay safe and, ultimately, teams’ own jobs become easier.

In this blog, we’ll review the main principles of safety by design and share seven features that can easily be built into a platform’s design to make it safer.

Safety by Design Fundamentals

In practice, safety by design means that product development should take a human-centric approach. According to the eSafety Commissioner, an Australian regulatory agency for online safety, safety by design must be embedded into the culture and ethos of a business. Stressing practical and actionable methods, the eSafety Commissioner holds that safety by design is achievable for platforms of all sizes and stages of maturity.

Here are the three fundamental principles that make up safety by design:

1. Service provider responsibility

The burden of safety is on the service provider, not on the user.

2. User empowerment and autonomy

The dignity of users is paramount. In practice, this means that a product should serve users, putting their interests first.

3. Transparency and accountability

Safety is achieved through transparency and accountability.

With this understanding of safety by design, we’ll dive into seven features your platform can implement to ensure the safety of your users. 

Product Features

With the following product features, teams can build a safe platform from the start.

1. Age Protection

Age verification mechanisms can ensure that only those who are old enough can gain access to your platform or service. An example of an age verification process is a form where a user enters their name and date of birth and uploads an identifying document. Generally, this feature is implemented during the sale or sign-up process of a platform. 

The ability to identify children allows a platform to implement protections for young users. For example, a platform can limit access to specific features and show only age-appropriate content. An additional safeguard for young users is granting parents control of the service. These features also help companies meet legislative requirements, such as the United Kingdom’s upcoming Online Safety Bill.
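
As an illustration, here is a minimal sketch (in TypeScript, with hypothetical names) of how a sign-up flow might compute a user’s age from their date of birth and gate access accordingly; real systems would pair this logic with document verification, not self-reported dates alone:

```typescript
// Minimal age-gating sketch; interface and thresholds are assumptions.
interface SignUpInput {
  name: string;
  dateOfBirth: Date;
}

// Compute a user's age in whole years as of `now`.
function ageInYears(dateOfBirth: Date, now: Date = new Date()): number {
  const age = now.getFullYear() - dateOfBirth.getFullYear();
  const hadBirthdayThisYear =
    now.getMonth() > dateOfBirth.getMonth() ||
    (now.getMonth() === dateOfBirth.getMonth() &&
      now.getDate() >= dateOfBirth.getDate());
  return hadBirthdayThisYear ? age : age - 1;
}

// Example policy: block under-13s; restrict features and show only
// age-appropriate content to under-18s.
function accessLevel(input: SignUpInput): "denied" | "restricted" | "full" {
  const age = ageInYears(input.dateOfBirth);
  if (age < 13) return "denied";
  if (age < 18) return "restricted";
  return "full";
}
```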

2. Reporting

A mechanism through which users can report abuse is crucial to every platform. The following questions should be asked when assessing this mechanism (a sketch of a possible report record follows the list):

  • Is the reporting system easy to find and use?
  • How do you ensure that relevant items are reported?
  • Is the category selection clear and exhaustive?
  • What is the process after an abuse is reported?
  • What is the response time?
  • Can a user appeal a decision?
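
To make these questions concrete, here is a minimal sketch of what a report record and its lifecycle might look like; the field names, categories, and statuses are illustrative assumptions, not a prescribed schema:

```typescript
// Illustrative report schema and lifecycle; all names are assumptions.
type ReportCategory =
  | "harassment"
  | "hate_speech"
  | "csam"
  | "illegal_goods"
  | "terrorism"
  | "other";

type ReportStatus = "open" | "under_review" | "actioned" | "dismissed" | "appealed";

interface AbuseReport {
  id: string;
  reporterId: string;
  contentId: string;
  category: ReportCategory;
  details?: string;  // free-text context from the reporter
  createdAt: Date;
  status: ReportStatus;
  resolvedAt?: Date; // set once a decision is made
}

// Response time is one of the questions above; track it per report.
function responseTimeHours(report: AbuseReport): number | undefined {
  if (!report.resolvedAt) return undefined;
  return (report.resolvedAt.getTime() - report.createdAt.getTime()) / 3_600_000;
}
```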

3. Content Moderation Tools

On platforms with user-generated content, content moderation tools can be implemented to stop abuse. Threats such as CSAM, illegal goods, or terrorist content can either be removed automatically or flagged for human review using harmful content detection.
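
As a rough sketch of the triage step that follows detection, the snippet below routes content based on a hypothetical classifier’s label and confidence; the labels, thresholds, and names are assumptions, and real pipelines typically combine hash-matching against known material with ML classifiers and human review:

```typescript
// Hypothetical triage step after automated detection.
interface DetectionResult {
  contentId: string;
  label: "csam" | "terrorism" | "illegal_goods" | "benign";
  confidence: number; // 0..1, from an upstream classifier
}

type ModerationAction = "auto_remove" | "human_review" | "allow";

function triage(result: DetectionResult): ModerationAction {
  if (result.label === "benign") return "allow";
  // High-confidence detections of the most severe harm are removed
  // automatically; everything else is queued for a human moderator.
  if (result.label === "csam" && result.confidence >= 0.95) {
    return "auto_remove";
  }
  return "human_review";
}
```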


4. Muting and Blocking

Basic tools can allow a user to restrict interactions with another user. Blocking, muting, and limited or restricted viewing let users decide with whom, and how, they want to interact.
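
A minimal sketch of how such a check might look at delivery time, assuming hypothetical per-user block and mute lists:

```typescript
// Hypothetical per-user relations checked before delivering an interaction.
interface UserRelations {
  blocked: Set<string>; // users this person has blocked entirely
  muted: Set<string>;   // users this person no longer wants to hear from
}

type Visibility = "deliver" | "suppress" | "reject";

// Decide whether a recipient should see an interaction from a sender.
function interactionVisibility(
  recipient: UserRelations,
  senderId: string
): Visibility {
  if (recipient.blocked.has(senderId)) return "reject";  // sender cannot interact at all
  if (recipient.muted.has(senderId)) return "suppress";  // accepted but never surfaced
  return "deliver";
}
```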

5. Hiding and Preventing Content

With the right features, exposure to harmful content created by problematic users can be dealt with swiftly. Platforms should be able to hide specific pieces of content, or all content, generated by malicious users. Internally flagging or labeling content lets teams temporarily limit its exposure or permanently delete it. For more minor cases or grey areas, visibility or discoverability can be reduced.

Going a step further, platforms should have a mechanism that can prevent new harmful content from being shared. For example, platforms should be able to block ongoing abusers from logging in.
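
The sketch below illustrates one way to model these graduated states along with a suspension step; the state names and structures are assumptions for illustration:

```typescript
// Graduated visibility states for moderated content; names are assumptions.
type ContentState =
  | "visible"
  | "reduced"  // grey areas: lower ranking and discoverability
  | "hidden"   // flagged: temporarily removed from view
  | "deleted"; // permanently removed

interface ModeratedContent {
  id: string;
  authorId: string;
  state: ContentState;
}

// Hide all of a malicious user's content and stop them from posting more;
// the banned-user set would be checked at login to block ongoing abusers.
function suspendUser(
  contents: ModeratedContent[],
  bannedUsers: Set<string>,
  userId: string
): void {
  for (const c of contents) {
    if (c.authorId === userId && c.state !== "deleted") {
      c.state = "hidden";
    }
  }
  bannedUsers.add(userId);
}
```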

6. Platform Policies

Effective, comprehensive, and exhaustive policies must be in place for Trust & Safety teams to take action. Whether called community guidelines, terms of use, or policies, these documents give teams the grounds to act against abuse. For guidance on building platform policies, read our Trust & Safety Policy review.
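
One way to make such policies actionable is to encode them as an enforcement matrix that maps violation categories to the actions a team may take; the categories and actions below are purely illustrative, not a prescribed taxonomy:

```typescript
// Illustrative mapping from policy violation categories to enforcement actions.
type Enforcement = "warn" | "remove_content" | "suspend_account" | "ban_account";

const enforcementMatrix: Record<string, Enforcement[]> = {
  spam:       ["warn", "remove_content"],
  harassment: ["remove_content", "suspend_account"],
  csam:       ["remove_content", "ban_account"], // plus any legally required reporting
};
```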

7. User Consent

Consensual software allows a user to explicitly say “yes” before interacting with a platform. Consent applies across a platform’s UX, software engineering, and data storage. Throughout the platform, enough information should be provided for users to make an educated decision about whether to opt in to features, activities, or data sharing. Default settings can be built with a “bias” towards privacy, and the platform can ask permission before interacting with anything potentially harmful.
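
A minimal sketch of privacy-biased defaults with explicit per-feature opt-in, using hypothetical setting names:

```typescript
// Privacy-biased defaults with explicit opt-in; setting names are assumptions.
interface ConsentSettings {
  shareUsageData: boolean;
  personalizedAds: boolean;
  publicProfile: boolean;
}

// Defaults are biased toward privacy: everything off until the user opts in.
const defaultConsent: ConsentSettings = {
  shareUsageData: false,
  personalizedAds: false,
  publicProfile: false,
};

// Record an explicit "yes" for a single feature, leaving the rest unchanged.
function optIn(
  settings: ConsentSettings,
  feature: keyof ConsentSettings
): ConsentSettings {
  return { ...settings, [feature]: true };
}
```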

As we’ve learned, technology companies have a responsibility to protect users and to build features within their platforms that enforce that protection. With these simple features, platforms can create safer online spaces by giving users more control, implementing preventative measures, and ensuring that proper responses to abuse are in place.
