Avoid CSAM and Child Safety Violations on Your Platform
The UK-based Internet Watch Foundation (IWF) declared 2023 the “most extreme year on record” for online child sexual abuse.
While this alarming statistic focuses on graphic child sexual abuse material (CSAM) like images and videos, data on non-graphic child sexual exploitation is almost nonexistent. This isn’t due to a lack of such material, but rather its elusive nature, which poses a significant detection challenge for user-generated content (UGC) platforms and their moderators.
Surprisingly, non-graphic child safety offenses are more prevalent on UGC platforms than their graphic counterparts. Their complexity and subtlety make them exceptionally difficult to detect, often allowing them to fly under the radar.
Non-graphic CSAM is a catch-all term that refers to two main types of content: text-based material and audio-based material, both discussed below.
The Child Crime Prevention & Safety Center estimates that 500,000 predators are active online every day, putting millions of children at risk.
These offenders commit a multitude of non-graphic child sexual exploitation offenses, whose subtlety offers a distinct advantage: because this material is harder to detect than graphic imagery, predators can communicate, interact with and exploit minors, and spread content more easily without getting caught.
Non-graphic child sexual violations span a wide range of offenses, illustrated below.
Examples of child safety offenses detected online and combated by ActiveFence
UGC platforms use three main methods to detect and remove graphic CSAM and take action against users: hash matching, which compares uploaded media against databases of known abusive material; machine-learning classifiers, which flag previously unseen images and videos; and user reports, which surface content that automated systems miss.
Why These Methods Don’t Work for Text-Based CSAM:
Hash matching identifies only known media files, not novel text. Visual classifiers are trained on imagery, not language. And user reports are reactive, arriving only after harm has occurred, while predators’ coded terminology evolves faster than keyword lists can keep up.
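To make the media-bound nature of hash matching concrete, here is a minimal sketch using the open-source imagehash library; real deployments rely on dedicated systems such as PhotoDNA or PDQ and vetted industry hash lists, and the hash value below is a hypothetical placeholder.

```python
# Minimal sketch of perceptual hash matching, assuming
# `pip install imagehash Pillow`. The known-hash set is a placeholder;
# real platforms consume vetted industry hash lists.
import imagehash
from PIL import Image

KNOWN_HASHES = {imagehash.hex_to_hash("fedcba9876543210")}  # hypothetical

def matches_known_hash(path: str, max_distance: int = 4) -> bool:
    """Flag an image whose perceptual hash is within a few bits of a known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)
```

Because the hash is computed from pixels, the technique has no text-based equivalent: a grooming message offers no stable fingerprint to match against.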
While less common than text-based abuses, audio-based abuses are explicit and, in theory, easier to detect. In practice, they often go unnoticed simply because platforms don’t monitor audio content, largely due to language barriers. APAC countries, including China and Japan, are major sources of such content, and moderators who are not fluent in the languages involved struggle to identify these abuses. Automated audio-detection mechanisms also fall short because, like human moderators, they are not trained on the vast linguistic diversity involved.
While emerging technologies offer some hope in detecting non-graphic abuses, the larger issue lies in the lack of awareness among Trust & Safety teams. Without specific intelligence about the types of abuses occurring on their platforms and the particular threat actors producing them, teams struggle to effectively detect, thwart, and remove malicious content and accounts.
However, ActiveFence’s intelligence shows that detecting audio-based CSAM can be easier than previously thought. This content is typically produced and distributed by a small group of repeat offenders with distinct trademarks and characteristics. Much like legitimate music producers, these abusers embed their unique names within their audio clips, which makes it easy to train detectors to automatically identify tracks that contain these indicators.
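As a rough illustration of that idea, the sketch below transcribes a clip with the open-source openai-whisper model and scans the transcript for known producer aliases; the alias list is a hypothetical placeholder, and a production detector would use curated intelligence and tuned acoustic models.

```python
# Minimal sketch: flag audio clips whose transcript mentions a known
# producer alias. Assumes `pip install openai-whisper`; the alias list
# below is a hypothetical stand-in for curated intelligence.
import whisper

KNOWN_PRODUCER_ALIASES = {"alias_one", "alias_two"}  # hypothetical

model = whisper.load_model("base")  # small multilingual ASR model

def flag_clip(path: str) -> set[str]:
    """Return any known aliases found in the clip's transcript."""
    transcript = model.transcribe(path)["text"].lower()
    return {alias for alias in KNOWN_PRODUCER_ALIASES if alias in transcript}

# Flagged clips go to human review rather than automatic removal.
```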
To combat non-graphic CSAM, online platforms must adopt proactive measures, precise intelligence, and effective strategies.
Here are a few actionable tips to mitigate risks and prevent harm on UGC platforms:
Cross-Platform Research: One of the most essential steps is identifying threat actors who operate across multiple platforms. For example, a predator might share non-graphic CSAM on a public social media platform and then redirect users to a private messaging app where they distribute graphic CSAM. By tracking CSAM violations across platforms, you can preemptively block these risks before they migrate to your platform. Tracking predators at their source provides valuable insights into their behavioral patterns, enabling platforms to detect these threat actors before they exploit their services (see the first sketch after this list).
Lead Investigations: While most CSAM is automatically removed from platforms, it is important to investigate the items and users removed for child safety violations. This allows you to monitor the evolving tactics, techniques, and terminologies used by bad actors (see the second sketch after this list). Understanding these patterns enables platforms to prevent CSAM more effectively and stay ahead of predators’ constantly evolving strategies.
Product Flexibility: To detect, moderate, and remove CSAM at scale, use advanced tools and products like ActiveOS or ActiveScore. Building your platform around safety-by-design principles, prioritizing user safety from the outset and throughout all stages of product development, ensures that safety measures are ingrained in the platform’s core. Remaining agile in adopting new technologies and incorporating features that improve detection and removal efficiency is also vital to staying ahead of offenders.
User Accountability: Documenting abuses and sharing them in a knowledge-sharing system is a proactive step in preventing threat actors from operating across platforms. Banning users and removing their content often doesn’t stop threat actors; they return with new accounts or migrate to other platforms. By cooperating with local law enforcement and sharing evidence, platforms can help catch offenders, creating real deterrence and reducing online child safety violations.
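To make the cross-platform research tip concrete, here is a minimal sketch of linking accounts across platforms through shared signals such as handles and media hashes; every name and signal here is hypothetical, and real systems draw on far richer intelligence.

```python
# Minimal sketch: link accounts across platforms that share identifying
# signals (handles, contact details, media hashes). All data is hypothetical.
from collections import defaultdict

# (platform, account_id) -> signals observed on that account
observations = {
    ("social_a", "user123"): {"handle:darkstar", "hash:abc123"},
    ("chat_b", "anon_42"): {"handle:darkstar", "email:x@example.com"},
    ("forum_c", "guest9"): {"hash:abc123"},
}

signal_index: dict[str, set] = defaultdict(set)
for account, signals in observations.items():
    for signal in signals:
        signal_index[signal].add(account)

# A signal seen on more than one platform links those accounts to one
# likely actor, enabling preemptive review before they migrate.
for signal, accounts in signal_index.items():
    if len({platform for platform, _ in accounts}) > 1:
        print(f"{signal} links accounts: {sorted(accounts)}")
```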
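And for the investigations tip, one lightweight way to surface emerging terminology is to compare term frequencies across batches of content removed for child safety violations; the records below are hypothetical.

```python
# Minimal sketch: surface terms whose frequency is rising in content
# removed for child safety violations. Records are hypothetical.
from collections import Counter

removed_this_month = ["term_a term_b", "term_b term_c", "term_b"]
removed_last_month = ["term_a", "term_a term_c"]

def term_counts(texts: list[str]) -> Counter:
    return Counter(word for text in texts for word in text.split())

current, previous = term_counts(removed_this_month), term_counts(removed_last_month)
emerging = {t: n for t, n in current.items() if n > 2 * previous.get(t, 0)}
print(emerging)  # candidates for analyst review and detector updates
```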
Effectively solving a complex and nuanced issue like non-graphic CSAM demands a deep understanding of the trends, pervasiveness, and tactics employed by bad actors. Safeguarding the most vulnerable users is a challenging task, one that requires precise intelligence and proactive measures. As such, partnering with experienced subject-matter experts can provide valuable assistance in effectively addressing these challenges.
Editor’s Note: The article was originally published on November 29, 2022. It has been updated with new information and edited for clarity.
Want to proactively keep all forms of CSAM and child safety violations off your platform?