In 2022, a teenager in a small Michigan town was found dead, deeply shaking his community and sending shockwaves across the US.
The tragic case of 17-year-old Jordan DeMay, who took his own life after being ensnared in an online sextortion scheme, revealed an alarming rise in such cases. Scammers posing as a girl on social media coerced Jordan into sharing intimate photos and then blackmailed him. When he couldn’t meet their demands, the relentless threats led him to suicide.
Financial sextortion, a severe form of exploitation where explicit images or information are used to extort money, has become a critical global concern. Recent trends indicate a disturbing increase in these incidents that typically target vulnerable minors. ActiveFence’s data reveals a staggering 650% rise in financial sextortion schemes targeting minors, with at least 32 minors in the US having died by suicide after falling victim to sextortion.
Sextortion can be categorized into two primary types: sexually motivated and financially motivated. Sexually motivated sextortion primarily targets young girls, with perpetrators using fake or male profiles to elicit explicit content. Financially motivated sextortion largely targets young boys, with scammers maintaining well-crafted female profiles to demand money.
Of the two types, financial sextortion is the most common, with 80% of sextortion schemes globally being financially motivated.
Organized Financial Sextortion
The FBI has seen a significant increase in financial sextortion schemes linked to scammers in Africa, where well-coordinated, large-scale operations exploit the internet’s anonymity and reach to target tens of thousands of minors daily with a single scheme. These scammers often pose as potential romantic interests, building trust before threatening to expose compromising images or information unless their demands are met.
Organized sextortion involves two main ecosystems of threat actors: scam centers in Southeast Asia and scamming groups in Africa. Asian scam centers operate centrally and produce two types of victims: the scammers themselves and their targets. The first type includes scam center employees, who are often tricked into believing they are taking legitimate IT jobs. Once they start working, they realize they have been deceived and are trapped in these centers by debt and other forms of coercion, essentially becoming victims of human trafficking. The second type of victims are the people these scammers target, who fall prey to schemes such as fake IT help desk services, romance scams, and sextortion.
Meanwhile, African scamming groups, primarily from Nigeria and Ivory Coast, operate in a decentralized manner on a much larger, global scale. These groups consist of scammers who join and participate of their own free will, scattered across different regions. They primarily focus on easy-to-execute online romance scams, flaunting the luxurious lifestyles their schemes afford them. Among the most prominent of these groups are the Yahoo Boys. These groups interact, share exploits, and even train newcomers, making them particularly challenging to detect and combat.
The Yahoo Boys were once associated with the infamous Nigerian Prince email scams of the early 2000s, which spread through the Yahoo email service. Today, the name broadly labels a scattered community of cyber scammers engaged in various schemes, including sextortion. Two notable Yahoo Boys, brothers from Lagos, Nigeria, recently pleaded guilty to exploiting US-based minors in a sextortion ring—including Jordan DeMay. Both men, in their twenties, were arrested by local authorities and extradited to the United States earlier this year.
Addressing sextortion online requires a multifaceted approach, but several specific challenges complicate the development of comprehensive solutions.
These challenges are exacerbated by the rise of generative AI (GenAI), which further complicates detection and enforcement.
Previously, criminals had to build trust and persuade victims to provide intimate content. Now, advancements in AI allow them to bypass this step entirely, using powerful tools to create highly convincing deceptions on a massive scale. Scammers with little to no technical skills can now weaponize images of victims using AI to create sexual abuse imagery for free or for as little as $2 USD.
Text-to-image applications that generate nude deepfakes from fully clothed photos are now widely accessible and advertised on popular social media platforms.
Additionally, these tools allow perpetrators to create more convincing sextortion schemes with just a click, fabricating entire personalities and generating realistic chat responses. Scammers feed GenAI tools simple prompts like “make me sound more like a 13-year-old,” or use them to polish written text and overcome language barriers. These capabilities have made such schemes easier to run and significantly harder to detect and enforce against.
Sextortion is a complex global problem requiring equally complex solutions. It demands a combination of several layers, including user awareness, proactive intelligence, automated detection tools, industry knowledge sharing, and enforcement collaboration.
Users must be informed about sextortion risks. While media coverage has raised awareness, platforms need to educate young users and their parents on identifying suspicious accounts and behaviors. Users should know resources are available to assist them if they become victims, preventing harmful decisions like not reporting the crime, transferring money to extortionists, or self-harm.
Online platforms should use in-house or outsourced tools to flag and remove malicious content. These automated tools need to identify behavioral patterns used by sextortion scammers, like catfishing.
Collecting signals both on- and off-platform is vital, including analyzing account characteristics, content, and user registration information to identify potential threats. Platforms should enhance warning and reporting features to prevent initial contact, warn of suspicious communication, and provide instructions and relevant information for potential victims.
ActiveFence’s detection solutions allow the mixing and matching of different content detectors and methods, including underage and nudity detectors, to identify, flag, and remove harmful content before it causes harm.
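As a rough illustration of how mixing and matching content detectors might work, the sketch below combines scores from multiple detectors into a single removal decision. The detector names, thresholds, and scoring interface are assumptions made for this example, not ActiveFence's actual API.

```python
# Hypothetical sketch of combining detector scores into a moderation decision.
# Detector names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DetectorResult:
    name: str
    score: float  # 0.0 (benign) to 1.0 (high confidence of abuse)

def should_remove(results: list[DetectorResult],
                  single_threshold: float = 0.9,
                  combined_threshold: float = 0.7) -> bool:
    """Flag content if any one detector is highly confident,
    or if two or more detectors agree at a lower confidence."""
    if any(r.score >= single_threshold for r in results):
        return True
    strong = [r for r in results if r.score >= combined_threshold]
    # e.g. underage + nudity firing together is a stronger signal than either alone
    return len(strong) >= 2

results = [DetectorResult("underage", 0.75), DetectorResult("nudity", 0.82)]
print(should_remove(results))  # True: two detectors above the combined threshold
```

The design choice here, requiring agreement between weaker signals while letting a single confident detector act alone, is one common way to trade off precision against recall in layered moderation pipelines.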
Sextortion is a cross-platform business that necessitates collaboration between different online platforms to share signals related to identified sextortion schemes. An essential resource is the National Center for Missing & Exploited Children (NCMEC)’s CyberTipline, used by tech companies to report child abuse incidents. This database helps prevent future offenses by compiling valuable information to combat these crimes effectively.
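The signal sharing described above can be sketched as a shared hash list that platforms check uploads against. This is a minimal illustration using SHA-256 for brevity; real hash-sharing programs rely on perceptual hashes (such as PhotoDNA or PDQ) that tolerate re-encoding and minor edits, and the function names here are hypothetical.

```python
# Hypothetical sketch of checking uploads against an industry-shared hash list.
# SHA-256 keeps the example simple; production systems use perceptual hashing.
import hashlib

shared_hash_list: set[str] = set()  # hashes of known abusive media, shared across platforms

def register_known_abuse(media: bytes) -> None:
    """Add a reported item's hash to the shared list."""
    shared_hash_list.add(hashlib.sha256(media).hexdigest())

def matches_known_abuse(upload: bytes) -> bool:
    """Check a new upload against the shared list before it is published."""
    return hashlib.sha256(upload).hexdigest() in shared_hash_list

register_known_abuse(b"previously reported image bytes")
print(matches_known_abuse(b"previously reported image bytes"))  # True
print(matches_known_abuse(b"new, unseen image bytes"))          # False
```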
Criminal networks involved in sextortion, especially organized ones, often operate across multiple layers of the internet, including the clear and dark web. By systematically monitoring these networks, platforms can cross-reference user data against activity on other platforms, enabling preemptive action. Robust threat intelligence tools can also detect early signs of sextortion, such as the sharing of exploitative scripts and services, and solicitations for training. This proactive strategy helps identify potential threats and disrupt sextortion activities before they reach victims.
ActiveFence’s Threat Intelligence aids Trust & Safety teams by providing methods to identify perpetrators and insights into the platforms and technologies they frequently use. We monitor threat actors at their sources using the insights of an international team of linguistic and subject matter experts. This proactive approach helps gather signals and trends, preventing predators from exploiting social media platforms.
Partnerships with local and international law enforcement agencies are essential to hold cybercriminals accountable. Effective collaboration ensures that perpetrators are prosecuted, highlighting the importance of global cooperation due to the international nature of many sextortion rings.
Another tactic is adopting an abuser’s mindset to identify platform weaknesses and policy loopholes. For example, T&S teams can create “honey-pot” profiles that mimic potential victims to attract scammers. Information gathered from these interactions, like profile names, pictures, and initial messages, can be used to uncover and disrupt broader sextortion networks, enhancing platform security and protection.
Foundation models and GenAI applications often inadvertently enable scammers to create material for sextortion. Unlike UGC platforms, where victims can flag abuse or report other users, GenAI systems typically lack such flagging mechanisms to surface these issues. Therefore, ensuring the integrity of these models is crucial to minimizing potential harm. This includes implementing safety evaluations, cleaning training data, filtering prompts and outputs, and conducting AI red-teaming.
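A minimal sketch of the prompt-and-output filtering mentioned above might look like the following. The keyword patterns and the `generate()` stub are purely illustrative assumptions; production guardrails use trained classifiers on both prompts and outputs rather than keyword lists.

```python
# Hypothetical sketch of wrapping a text-generation model with a prompt filter.
# The blocked patterns and generate() stub are illustrative, not a real guardrail.
BLOCKED_PROMPT_PATTERNS = ["sound like a 13-year-old", "sound more like a 13-year-old"]

def prompt_is_allowed(prompt: str) -> bool:
    """Screen the prompt before it ever reaches the model."""
    lowered = prompt.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PROMPT_PATTERNS)

def generate(prompt: str) -> str:
    return f"model output for: {prompt}"  # stand-in for a real model call

def safe_generate(prompt: str) -> str:
    if not prompt_is_allowed(prompt):
        return "[request refused: violates safety policy]"
    output = generate(prompt)
    # a full guardrail would also screen `output` here before returning it
    return output

print(safe_generate("make me sound more like a 13-year-old"))
```

Filtering on both sides of the model matters: prompt filters stop obvious misuse cheaply, while output filters catch harmful generations that slip past them.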
All AI systems must prioritize safety throughout all stages of product design and deployment.
Financial sextortion, particularly targeting minors, presents a complex and growing threat. Combating it effectively requires a multi-dimensional, unified effort from multiple stakeholders, including user education, proactive intelligence, and strong collaboration across tech platforms and law enforcement. By integrating these strategies, we can better protect vulnerable populations and disrupt these malicious activities.
Editor’s Note: The article was originally published in September 2023. It has been updated with new information and edited for clarity.