As part of our work protecting users online, ActiveFence’s Child Safety Researcher, Rafael Javier Hernández Sánchez, led an in-depth analysis and on-platform investigation of the threats that predators pose to children in the VR world.
Virtual reality (VR) games and platforms appeal to minors of all ages, and they have become attractive targets for predators seeking children to groom and abuse. As discussed in Trust & Safety in the Metaverse, VR is in its infancy. As a result, Trust & Safety teams do not yet fully understand the risks users face and have yet to develop effective approaches to mitigate them. In this article, we review some of the key findings that provide essential insights for this new arena of Trust & Safety.
Virtual reality spaces mimic the physical world.
However, like other online platforms, user activity in VR is characterized by reduced behavioral inhibitions. This is particularly dangerous because VR is immensely popular with young users, including significant numbers of under-18s.
The simulated world that VR platforms provide, combined with the naivete of young users, raises the risk of predators taking advantage and inappropriately engaging with minors who lack the safeguards of their physical surroundings.
Trust & Safety teams tasked with maintaining child safety in VR face unique behavioral and technological challenges.
While VR platforms and hardware vendors meet the tech industry’s minimum age requirement, allowing only users 13 years or older to use their services, children abound. On one VR platform that lets users create virtual worlds, we found underage users in every virtual world we visited, despite the platform’s 18+ age limit. Our research and evaluation of user voices and general behavior in VR lobbies suggest a high prevalence of accounts operated by users aged ten or younger. These children easily bypass age requirements by lying about their age or by using headsets already set up for parents or older siblings.
Parents are often part of the problem. Not understanding the risks, many use VR technology as a babysitter. We have identified numerous online posts in which parents ask au pairs to spend time with their children inside VR worlds after the children return home from school.
Minors join VR lobbies without any oversight. Lacking caution, they run around, talk loudly, and approach adult users. Some adults leave, annoyed by the interaction, while others engage the minors in conversation.
Our investigations found that minors frequently share personal details within earshot of other players, revealing their names, locations, and other personally identifiable information.
Children’s behavior in VR is no different from that seen on 2D gaming platforms. However, the immersive format amplifies the risk: it is easier to reveal personal details when speaking “in person.” This trusting disposition leaves young users open to abuse.
In VR, child predators, already emboldened by the operative freedom that online anonymity affords, are taking advantage of new ways to reach and build relationships with minors.
Some predators carry out child sexual exploitation by listening for the voices of minors. They encroach on the children’s virtual personal space, “touching” and groping their avatars.
Others use this access as another method to groom minors. Our research identified predators seeking to convince children to participate in simulated sexual acts or erotic role play (ERP) in order to build sexual trust. Predators then leverage this trust to move communications off-platform, where they request or demand sexually explicit self-produced recordings from the child. Child predators carry out this behavior for sexual gratification, while scammers use it to conduct sextortion against these minors.
The relative privacy afforded by the physical dimensions of VR spaces facilitates these inappropriate interactions. Threat actors “hide” their grooming attempts, even in public lobbies, by approaching minors out of view of others and then taking them to hidden spaces. They also exploit public worlds, such as virtual nightclubs or theaters, that contain “private” rooms which can be entered when empty and then locked.
In addition to luring minors into private spaces, other predators bring vulnerable users into fully private worlds. The risks are reminiscent of physical-world predator-child encounters, where adults offer minors rewards in exchange for accompanying them. There have been reported cases of groomers offering minors real money in exchange for entering their private VR worlds.
A broader concern is that VR can provide a pathway for off-platform child sexual exploitation. As mentioned above, minors have been groomed by adults in VR and coaxed into sending real-world pictures of themselves to their abusers.
There have already been cases of child predators who met children in VR and built exploitative relationships with them over time, eventually visiting the minors in person and sexually abusing them. In one instance in 2020, a 36-year-old man met a 15-year-old girl on a VR platform. He groomed her and crossed state lines from Louisiana to Florida to meet her. He proceeded to live in her bedroom, unbeknownst to her parents, and sexually abused her for four weeks before his arrest.
Our research into dark web child predator communities uncovered similar activity. We detected adult users describing how they were using VR platforms to speak with and spend extended periods of time with underage boys. These men expected those relationships to become physical.
VR is not only a technological revolution; it demands a paradigm shift for Trust & Safety. When users put on a VR headset, they enter very real places carefully crafted to imitate reality, complete with many of its physical limitations.
Unlike 2D online spaces, where exchanges are textual and archived on platforms or in search engine records, VR allows children to find themselves locked in rooms with adults, with no one able to hear or see them. Child predators in VR leave no traceable footprint, as the suspicious interactions are fleeting: an inappropriate spoken comment, question, or invitation leaves no digital evidence that traditional OSINT mechanisms can detect.
If Trust & Safety teams are to secure VR for minors, education is essential.
To support these efforts, Trust & Safety operations must use off-platform intelligence, monitoring communities of threat actors for mentions of VR in their discussions of illicit activities. This is crucial for identifying VR abuse, providing actionable insights and greater visibility into abusive trends.
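To make the idea concrete, below is a minimal sketch of the first pass of such monitoring: keyword-based flagging over collected forum posts. All names, the keyword list, and the sample data here are hypothetical illustrations, not ActiveFence’s actual tooling; real intelligence work relies on analyst-curated terminology that evolves with threat-actor slang.

```python
import re
from dataclasses import dataclass

# Hypothetical, illustrative keyword list; a production list would be
# curated by analysts and updated as threat-actor terminology shifts.
VR_TERMS = re.compile(
    r"\b(vr|virtual reality|metaverse|headset|avatar)\b",
    re.IGNORECASE,
)

@dataclass
class ForumPost:
    post_id: str
    community: str
    text: str

def flag_vr_mentions(posts):
    """Return posts that mention VR-related terms.

    Keyword matching alone cannot establish intent; flagged posts
    would be routed to human analysts for review.
    """
    return [p for p in posts if VR_TERMS.search(p.text)]

# Placeholder sample data, purely for illustration:
sample = [
    ForumPost("p1", "forum-a", "anyone else spending time in VR lobbies?"),
    ForumPost("p2", "forum-b", "unrelated discussion"),
]
for post in flag_vr_mentions(sample):
    print(post.post_id, post.community)
```

The value of such a filter lies not in the match itself but in surfacing candidate posts for analysts, who can then connect VR mentions to the broader illicit activity discussed in these communities.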
For a sample view of ActiveFence’s work in VR, access our exclusive research into VR exploitation for child abuse, hate speech, and terrorist group promotion.