The metaverse is a new animal; unlike the digital platforms we’ve become accustomed to, it offers a new type of online experience that rallies around virtual community building and provides a sort of digitized iteration of reality. There are a lot of elements that platforms need to consider when creating digitally safe and secure online spaces in the metaverse. Recently, three ActiveFence experts weighed in on what some of those key considerations are. What you’ll find below is a transcript of their candid conversation about some of the more pressing issues regarding Trust & Safety in the metaverse.
Amit Dar, Senior Director of Corporate Development and Strategy: It has almost infinite definitions. It's anything from Oculus VR goggles to something as simple as AirPods, where you're wearing a piece of technology and connecting to a digital world. For Trust & Safety teams, it's any place that acts as a means to connect humans, whether it's in a game, via a business experience, or even video chat. It uses new technologies that blend the online and offline worlds, but those same technologies create new risks and harms.
Tomer Poran, VP Solution Strategy: The metaverse is more of a journey than a destination. It's a transition in which we're digitizing experiences that previously only existed in the physical world. It's not yet fully formed, but it's still really important to have policies in place. In the early days of Web 2.0, there was this mindset that since there weren't so many people generating content on the internet, it wasn't necessary to regulate it. The thought process was to grow it first and deal with safety later on. Cut to now, and we've seen platforms that are absolutely swarmed with harms. When we enter Web 3.0 and the metaverse, not having guardrails in place or safety by design in mind will lead us to repeat history.
Amit: With every new technology, there are unknown harms and new manifestations of harms that crop up, and it takes a while for the government to catch up with the speed and the reach of technologies. The fact that we haven’t yet reached critical mass means it’s the optimal time to expedite regulatory processes for the metaverse. Being proactive now means that by the time the metaverse is at critical mass, we’ll already have safeguards in place.
Tomer: There are real-world laws, and then in the online world, there’s what the industry calls ‘lawful but awful.’ These are things that are legal in the real world, but their effects online can be vast and dangerous. For example, it’s legal to say that Covid-19 is fake and that the vaccines are implanting chips in our brains. But when you take that message to social media, where sources can be falsified and fake accounts and unwitting users can amplify it, it has the potential to do a lot more harm.
Tomer: The key is transparency around what policies a platform has in place and how they’re being enforced. Platforms should invest in early trend detection, employ fact-checkers and do everything they can to minimize the spread of misinformation. It’s on watchdog groups, government agencies, the media and the public to make sure that platforms are making the best effort to enforce their policies.
Matar Haller, PhD, VP Data: It turns out that even with things we think are clear red lines, there is still a lot of room for interpretation. For example, is a video of a baby's first bath CSAM? Does it depend on where it is shared? Or does it become CSAM based on the comments it receives? In the metaverse, this becomes even more complicated, since there's crossover between red lines and gray areas even when it comes to a seemingly clear-cut issue like CSAM. With misinformation and disinformation, the situation is even murkier. Needless to say, from a data perspective, misinformation and disinformation represent a rapidly changing landscape even in Web 2.0. The metaverse is a living and breathing thing, so not only is the rate of change faster, but the manifestations of misinformation and disinformation are much richer.
Matar: It's really all about balance. The metaverse gives you the ability to leave your 'regular self' aside if you want to. You can be as anonymous as you want to be, or as transparent as you want to be. This is good for individual privacy, but when it comes to moderation, it's a challenge for platforms. It really comes down to the question of who needs to know who you are, and to what extent they need to know. Trust & Safety teams need to be able to keep users safe while still offering that level of transparency and choice regarding privacy.
Tomer: The gaming space in the metaverse is pretty vulnerable to harm. You've got user-generated games, which aren't new, but on a much wider scale. Users can upload not just games, but reenactments of real-life situations, like shootings. It's a different, more dangerous level of exposure. You've also got the issue of user-generated spaces that are being filled with malicious content that moderators can't get into. It used to be that moderators who had access to a forum or a group through a link could get inside, see what was going on, and shut it down as necessary. Now, users can create their own spaces with invite lists, so even if you've got a link or an access code, you won't be allowed inside unless you're on the list. That means there are entire worlds that platforms can't access or regulate, and that poses a real risk.
Matar: Unlike Web 2.0, which is more 2D, the metaverse is more 3D, which means there are vastly more ways to hide content. Nowadays, we can analyze videos and images, scanning for problematic aspects. We know what to look for and how, but since the metaverse is multi-layered, it’s more complex. For example, in a space in the metaverse, you can zoom in and zoom out to catch things in greater detail. You might zoom out of a space to see that the chairs are arranged in a swastika, or zoom in to see that the woodgrain of those chairs has swastikas on them, or turn the chairs over to reveal something more. Simply scanning, then, isn’t enough. It’s difficult to moderate, but it just means that platforms need to change their approach. The old methods won’t work in this new virtual world. Being proactive and knowing where things are coming from will help us know where to look.
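One way to picture what "zooming in and zooming out" means for detection is to treat each virtual space as something that has to be rendered and classified from many viewpoints and zoom levels, rather than scanned once like a flat image. The sketch below only illustrates that idea in schematic Python, with hypothetical render and classify functions stubbed in; it is an assumption-laden example, not a description of ActiveFence's actual detection pipeline.

```python
from dataclasses import dataclass
from itertools import product
from typing import Callable, List

# Hypothetical camera parameters for one rendered view of a virtual space.
@dataclass
class View:
    position: tuple  # camera location inside the space
    zoom: float      # zoom level (wide shot vs. close-up)
    angle: float     # rotation around the scene, in degrees

def candidate_views(zoom_levels: List[float], angles: List[float]) -> List[View]:
    """Enumerate camera placements so the same scene is inspected both
    zoomed out (overall layout, e.g. how furniture is arranged) and
    zoomed in (surface details, the undersides of objects)."""
    return [View(position=(0.0, 0.0, 0.0), zoom=z, angle=a)
            for z, a in product(zoom_levels, angles)]

def moderate_space(render: Callable[[View], bytes],
                   classify: Callable[[bytes], float],
                   threshold: float = 0.8) -> List[View]:
    """Render the space from many viewpoints and flag any view whose
    image-classifier risk score crosses the threshold."""
    flagged = []
    for view in candidate_views(zoom_levels=[0.25, 1.0, 4.0],
                                angles=[0.0, 90.0, 180.0, 270.0]):
        image = render(view)              # 2D snapshot of the 3D scene
        if classify(image) >= threshold:  # reuse existing image models
            flagged.append(view)
    return flagged

if __name__ == "__main__":
    # Stubs stand in for a real rendering engine and a real image model.
    fake_render = lambda view: b"image-bytes"
    fake_classify = lambda image: 0.1
    print(moderate_space(fake_render, fake_classify))
```

The point of enumerating views is that the same scene gets inspected as a wide shot, as a close-up, and from multiple angles, which is exactly the kind of coverage a single flat scan misses.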
–
As the metaverse continues to take shape, gain traction, popularity, and users, and becomes synonymous with internet usage, Trust & Safety teams will need to take into account a variety of considerations to ensure safe and secure digital spaces for their users. This panel offers just a cursory overview of those considerations. ActiveFence advocates a content moderation strategy that's applicable to both Web 2.0 and its further iterations: AI-powered content detection informed by subject-matter intelligence will give Trust & Safety teams the edge they need to prevent harms on their platforms.
Want to learn more about the threats facing your platform? Find out how new trends in misinformation, hate speech, terrorism, child abuse, and human exploitation are shaping the Trust & Safety industry this year, and what your platform can do to ensure online safety.