In the first edition of the ActiveFence Policy Series, we take a look at the child safety policies of major tech companies and examine their core components. Download the complete report to learn how industry leaders protect their youngest users from online harm.
The process of creating robust, comprehensive community guidelines or trust and safety policies is an ongoing one. Protecting your platform’s users requires constant monitoring of ever-changing on-platform behaviors, shifting legal requirements, and competitive analysis of industry best practices. This is particularly true in the child safety space, where legislation and company policies work together to keep the most vulnerable users safe.
The following article and accompanying report are the first part of ActiveFence’s Policy Series and provide an analysis of the policies and community guidelines that the biggest online platforms use to ensure child safety. These policies are broadly broken down into four categories: CSAM, child abuse, bullying and harassment, and self-harm.
Given the severity and legal ramifications of hosting and enabling the distribution of CSAM, the policies of companies that operate platforms hosting user-generated content tend to be strict and relatively uniform. However, companies adjust certain aspects of their policies as necessary, depending on the age of their user base and the services they provide.
Platform policies must be rigorous in order to keep children safe. Furthermore, as threats evolve over time, they must also be responsive and capable of meeting new challenges. This means that those seeking to create trust and safety policies for their platforms need to have a robust understanding of the digital environment, not only as it is today but also as it will be in the future.
Given the speed at which change occurs in online spaces and the sheer amount of information there is to process, this can be a complicated endeavor. ActiveFence will monitor these community guidelines and policies over time, regularly reporting on and interpreting changes. By doing so, this report aims to help platforms make informed decisions regarding their approach to trust and safety and the rules that they put in place.
Complete with examples and divided by platform category, this guide provides useful insights into how various platforms—and types of platforms—work to keep predators from abusing their services and users.
The complete report features an analysis of four abuse areas: CSAM, child abuse, bullying and harassment, and self-harm. For each risk area, we provide the responses of five different types of digital platforms: social media, instant messaging, video conferencing, video sharing, and file sharing. Each type of platform comes with its own unique risk areas that dictate the necessities and requirements of its CSAM-related policies.
Social media platforms host everything from text, images, and videos to private messages and public and private groups. The multiple risk areas for abuse require these platforms to strictly enforce guidelines that ensure the safety of their users and compliance with different national legislations.
Instant messaging platforms are also particularly vulnerable to being exploited by child predators looking to trade and access CSAM. As a result, platforms that offer instant messaging services—both text and images—must be active in combating CSAM on their platforms.
The COVID-19 pandemic saw exceptional growth in the use of video conferencing platforms. Unfortunately, this popularity also made these platforms more susceptible to abuse, including the dissemination of CSAM. As a result, these types of platforms have enacted various measures and guidelines to mitigate this dangerous activity.
The very nature of video-sharing platforms makes CSAM a concern for platform exploitation, whether through the sharing of video and still-image CSAM or through the use of comments sections to sexualize minors depicted in innocent material.
File-sharing and cloud storage services are also used by child predators to store and distribute CSAM. While child predators mainly utilize dark web file-sharing platforms, the limitations of these alternative servers also lead predators to exploit mainstream and surface web platforms for easier access to these illegal files.
The best platform policies are responsive, evolving as new threats arise and change over time. Because policies must be shaped continuously, our team will keep monitoring all relevant changes and developments in the trust and safety ecosystem and provide updates as policies change.
Our comprehensive reports detail how some of the market leaders in the online space are currently addressing the threat of CSAM on their platforms.