At the Crimes Against Children Conference, I had the opportunity to connect with child safety and anti-trafficking experts as well as trust and safety professionals from around the world who are dedicated to child safety. It was an incredible time for meaningful discussions and potential collaborations, as we all shared a deep commitment to tackling critical issues. Even though we weren’t all direct colleagues, many of us are facing the same challenges, especially when it comes to ensuring child safety online.
Here are my main takeaways from the event, including insights into the growing threats of sextortion, the impact of generative AI (GenAI) on online safety, and the importance of collaborating to help solve these challenges.
A major takeaway from the conference was the growing threat of Generative AI being used to create CSAM across mediums like voice, text, image, and video. Attendees agreed that no single solution is sufficient to counter this risk. Instead, it requires a comprehensive approach that includes implementing tools and models to establish strong safeguards, removing unmoderated models, and educating minors on safe AI use. The aim is to quickly close the gap between emerging threats and the development of solutions, especially since AI-generated content can spread indefinitely, with dire consequences.
The conference identified financial sextortion as a rapidly growing threat that lacks a comprehensive response. Attendees emphasized the need for a multi-layered solution that spans education, legal frameworks, and cross-sector collaboration. There’s a clear gap in understanding and addressing this issue, and urgent attention is required to develop strategies that can effectively prevent, identify, and combat financial sextortion across various platforms and communities.
The conference shed light on the urgent need for more resources to address child safety violations coming from the Asia-Pacific (APAC) region. Although these countries are major producers of CSAM, they often struggle to combat online threats due to other societal challenges like poverty and violence. The session underscored the global impact of local issues and called for nuanced strategies that consider the specific needs and conditions of these developing countries to improve safety standards and enforcement on a global scale.
The conference also stressed the importance of broadening the focus beyond sexual exploitation to tackle other serious online threats to minors. These include child trafficking, illegal adoption, child labor, privacy violations, cyberbullying, and hazardous social media challenges. Though less often recognized, these issues can have severe impacts on the lives of minors, sometimes even more so than sexual exploitation. The intersection of child safety and human exploitation demands urgent attention and a dedicated approach to protecting minors on all these fronts.
The conference clarified, without ambiguity, that synthetic (or AI-generated) CSAM is illegal in the United States. A key session explained that this type of content is prosecutable under existing laws, specifically U.S. Code Title 18 Section 1466A, which covers obscene visual representations of child sexual abuse. This includes CGI-generated images that depict, or appear to depict, minors in sexually explicit conduct. The discussion stressed that a solid legal framework is already in place to address and prosecute these violations effectively.
Another key takeaway was the essential role of effective collaboration in combating decentralized online threats. To effectively address them, platforms must have interconnected Trust and Safety teams. These teams face a complex challenge: threat actors often operate across multiple platforms, making it difficult to gain a full picture of their activities. By sharing information and context, Trust and Safety teams can save time on investigations, gain a more complete understanding, and enhance their overall efficiency in protecting users.
While AI is a powerful tool, it can’t replace the deep understanding of seasoned professionals from fields like criminology, sociology, human rights, and security studies. These experts are vital for identifying new circumvention tactics, terminology, and emerging threats. For AI to be truly effective, it must work hand-in-hand with human expertise. Researchers, engineers, and analysts need to work closely to translate their insights into scalable detection systems, allowing for proactive and effective responses to evolving online threats.
Effective collaboration between technology companies and law enforcement is incredibly important in tackling online child exploitation. The conference underscored that law enforcement struggles to prosecute every instance of CSAM due to the intensive resources needed for evidence gathering. Tech companies can help by providing comprehensive information packages, making it easier for law enforcement to investigate and prosecute cases with less effort. This partnership is crucial for building strong cases against online predators and streamlining the process from evidence collection to conviction.
Everyone, from Trust and Safety professionals to law enforcement officers to researchers, walked away from this year's Crimes Against Children Conference reminded that collaboration is the most vital component of combating child harm online.
At ActiveFence, our approach to child safety involves combining advanced AI with human expertise to swiftly identify and mitigate threats. As we continue to lead in this essential work, I invite you to read up on our latest research to see how we’re advancing the fight against online exploitation. You can also schedule time to talk with our team of experts.
See you next year.