Manage and orchestrate the entire Trust & Safety operation in one place - no coding required.
Take fast action on abuse. Our AI models contextually detect 14+ abuse areas - with unparalleled accuracy.
Watch our on-demand demo and see how ActiveOS and ActiveScore power Trust & Safety at scale.
The threat landscape is dynamic. Harness an intelligence-based approach to tackle the evolving risks to users on the web.
Don't wait for users to see abuse. Proactively detect it.
Prevent high-risk actors from striking again.
For a deep understanding of abuse
To catch the risks as they emerge
Disrupt the economy of abuse.
Mimic the bad actors - to stop them.
Online abuse takes countless forms. Understand the types of risks Trust & Safety teams must protect users from on-platform.
Protect your most vulnerable users with a comprehensive set of child safety tools and services.
Stop toxic and malicious online activity in real time to keep your video streams and users safe from harm.
The world expects responsible use of AI. Implement adequate safeguards in your foundation model or AI application.
Implement the right AI guardrails for your unique business needs, mitigate safety, privacy, and security risks, and stay in control of your data.
Our out-of-the-box solutions support platform transparency and compliance.
Keep up with T&S laws, from the Online Safety Bill to the Online Safety Act.
Over 70 elections will take place in 2024: don't let your platform be abused to harm election integrity.
Protect your brand integrity before the damage is done.
From privacy risks to credential theft and malware, the cyber threats to users are continuously evolving.
Here's what you need to know.
Gaming is the new social media. Games are gradually creating alternative realities where people spend a significant part of their lives. Disinformation in games can be even more dangerous than on traditional social media because it can feel more real, blurring the line between fiction and reality.
Malicious actors are abusing generative AI music tools to create homophobic, racist, and propagandistic songs — and publishing guides instructing others how to do so as well.
The campaign to pin the latest incident on Ukraine and divert attention from Russia’s security failings targeted more than just a domestic audience. According to research by ActiveFence, which has found tens of thousands of newly launched accounts, these accounts have been publishing posts that support the Russian narrative of Ukrainian and Western complicity in at least seven languages, including Arabic.
As the Ukraine war grinds on, the Kremlin has created increasingly complex fabrications online to discredit Ukraine’s leader and undercut aid. Some have a Hollywood-style plot twist.
Where are investors putting their money in 2024? A recent Crunchbase article points to a new focus: fighting disinformation, and highlights 16 promising startups in the space.
Four leading gaming Trust and Safety companies are banding together to form The Gaming Safety Coalition. This strategic alliance between Modulate, Keywords Studios, ActiveFence and Take This represents the companies’ shared commitment to improving player and moderator well-being.
Online platforms face unavoidable responsibilities for Trust and Safety, particularly in maintaining election integrity by combating disinformation and other dangers. AI further complicates these challenges, not only through the creation of deepfakes but also by empowering more malicious entities.
Amidst the conflict between Hamas and Israel, a disturbing surge in antisemitic and Islamophobic hate speech has swept across social media platforms. Extremist influences, fueled by the ongoing conflict between Israel and Gaza, have played a significant role in exacerbating this alarming rise in hate speech online.
The shift away from in-house trust and safety teams has created an opportunity for consultancies and startups to introduce something novel: trust and safety as a service.
When the militant group Hamas launched a devastating surprise attack on Israel on Oct. 7, some fighters breached the country’s defenses in motorized paragliders. In the following days, photos and illustrations of Hamas forces coasting by wing became highly charged, controversial symbols: an emblem of Palestinian resistance to some, a glorification of terrorism to others.
The startup ActiveFence, a trust and safety provider for online platforms, is one company sounding the alarm about how predators are abusing generative AI, and helping others in the tech industry navigate the risks posed by these models.
TikTok became the world’s window into the conflict in Israel. Clips from a music festival in southern Israel, where 260 attendees were killed and more taken hostage according to Israeli rescue agency Zaka, broke through the algorithm’s regularly scheduled lighthearted programming. For the most part, Noam Schwartz thinks TikTok has played a positive role in the conflict. “People would not believe the magnitude of this event without it being amplified in social media,” he said.
ActiveFence, one of the bigger startups building tech for trust and safety teams, has acquired Spectrum Labs, another key startup in the space building AI tools to track online toxicity.
Russian propaganda is spreading into the world’s video games. Propaganda is appearing in Minecraft and other popular games and discussion groups as the Kremlin tries to win over new audiences.
The revolution in artificial intelligence has sparked an explosion of disturbingly lifelike images showing child sexual exploitation, fueling concerns among child-safety investigators that they will undermine efforts to find victims and combat real-world abuse.
Child safety experts are growing increasingly powerless to stop thousands of "AI-generated child sex images" from being easily and rapidly created, then shared across dark web pedophile forums. This explosion of disturbingly realistic images could normalize child sexual exploitation, lure more children into harm's way, and make it harder for law enforcement to find actual children being harmed.
Child predators are exploiting generative artificial intelligence technologies to share fake child sexual abuse material online and to trade tips on how to avoid detection, according to warnings from the National Center for Missing and Exploited Children and information seen by Bloomberg News.
Noam Schwartz provides key strategies for the US government to counter Russian disinformation campaigns targeting Ukraine. By implementing a comprehensive approach, the US can effectively combat the spread of false narratives. This article offers valuable insights and recommendations for policymakers and those invested in countering disinformation.
Live content moderation is a well-known challenge to Trust & Safety teams. Read how combining AI and human expertise can be the solution.
Seeing false and toxic information as a potentially expensive liability, companies in and outside the tech industry are angling to hire people who can keep it in check, ActiveFence being one of them.
Online platforms and their users are susceptible to a barrage of threats – from disinformation to extremism to terror. Daniel and Chris chat with Matar Haller, who is using a combination of AI technology and leading subject matter experts to provide Trust & Safety teams with tools to protect users and ensure safe online experiences.
Disinformation has long been a feature of politics. Yet wading through the muck ahead of this year’s midterm elections in one fiercely contested state, Pennsylvania, shows just how thoroughly it now warps the American democratic process.
Federal officials are warning that China is working to interfere in November's midterm elections. Rachael Levy, Director of Geopolitical Risk at ActiveFence, joined CBS News to discuss the Communist Party's tactics in attempting to influence U.S. politics.
On top of widespread disinformation around election fraud, ActiveFence has detected online discourse promoting military intervention and suggesting the military should play a more active role in the electoral process.
In this episode of Reckoning, Kathryn Kosmides speaks with Noam Schwartz about the history of trust and safety on the internet, why companies are investing millions of dollars into Trust & Safety, and proactive vs. reactive online harm prevention.
Dennis Kahn, research lead at ActiveFence, talks about extremist online content in Brazil, saying he is most concerned about calls for military intervention and a violent coup in favor of Bolsonaro, threats that have appeared on Telegram, Gettr, and local platform PatriaBook.
Amit Dar, senior director of strategy at ActiveFence, adds to the conversation about the vulnerabilities of cross-chain bridges.
Inbal Goldberger, ActiveFence VP of Trust & Safety, shares how scaled detection of online abuse can reach near-perfect precision by combining the power of innovative technology, off-platform intelligence collection, and the prowess of subject-matter experts.
Metaverse and Web3 have become terms that describe aspects of the future internet; these technologies are building immersive worlds that intersect digital and real life. As more people migrate to the metaverse, real-world complications are bound to arise.
Armed demonstrators and extremist groups have increasingly gathered at abortion-related protests in the aftermath of the Supreme Court’s overturning of Roe v. Wade, causing analysts to warn of a rising threat of violence.
An interview with CEO and Co-Founder Noam Schwartz on the importance of proactive content detection in preventing online harm.
Today’s guest is Noam Schwartz, the CEO and Co-Founder of ActiveFence, which raised $100M for the software that helps keep the internet safe.
We often hear and read about digital security, but digital safety concerns have also become a key issue for online platforms, creating a need for services and tools to address online integrity.
In the early stages of the internet, small platforms may have been able to hire a few people to ensure the content users were sharing was both truthful and non-violent. Today, so much information is shared every second that the field of content moderation requires constant innovation to keep up and continue doing its job.
You might want to change all your passwords after reading this.
Online abuse, disinformation, fraud and other malicious content are growing and getting more complex to track. Today, a startup called ActiveFence is coming out of the shadows to announce significant funding on the back of a surge of large organizations using its services.
"Even if all European QAnons support the standard narrative, that is to say they support Trump and far-right ideas, each group adapts these messages to local circumstances," said the director of strategy at the Israeli cybersecurity company ActiveFence, Nitzan Tamari.
The “metaverse” is no longer a far-off concept in Sci-Fi novels. With this new reality, here are four evolving areas to watch as online platforms grapple with new and growing abuse vectors and the new phase of accountability.