Pride events, which originated in 1970 after the Stonewall Riots, have unfortunately been marred by hate over the years.
Despite some progress, violence, discrimination, and disinformation targeting the LGBTQ+ community persist, both at physical events and in the digital world.
Platforms of all sizes witness calls for violence, dissemination of hateful speech, and harmful content that affect users not only during Pride Month but also throughout the year. To keep users safe, Trust & Safety teams must identify and address the rising hostility surrounding Pride Month.
In light of Pride Month, and based on ActiveFence’s intelligence, which scans countless online sources, from the depths of the dark web to mainstream platforms, we have compiled a list of the most prevalent anti-LGBTQ+ narratives during Pride Month 2024.
By proactively detecting these narratives on-platform, Trust & Safety teams can preemptively mitigate risks and stay ahead of emerging threats, ensuring a safer online environment for all.
The LGBTQ+ community faces persistent threats of violence from neo-Nazi groups, especially during Pride Month. This year, there has been an increase in content circulating on various user-generated content (UGC) platforms that encourages hate crimes and targets specific LGBTQ+ clubs and areas, further exacerbating fears and tensions within the community.
The neo-Nazi online community is particularly vocal, producing numerous blunt hashtags like “#Stop-trans-agenda” and “#Stop_lgbt_propaganda” on their dedicated platforms. They also use less obvious hashtags on more general platforms, such as “#Anti-Furry,” an online slur that dehumanizes non-binary individuals by labeling them as non-human creatures. Another example is #Stolzmonat, which translates to “Pride Month” in German. The Stolzmonat campaign is a German nationalist movement that opposes Pride parades and calls for harm against participants.
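The spelling and separator variations in these hashtags (for example, “#Stop-trans-agenda” versus “#Stop_lgbt_propaganda”) are one reason naive keyword blocklists miss them. As a rough illustration only, and not ActiveFence’s tooling or API, the hypothetical Python sketch below shows how a moderation pipeline might canonicalize hashtag variants before checking them against a watchlist of known hateful tags; all names in it are invented for this example.

```python
import re

# Hypothetical illustration (not ActiveFence's API): map hashtag variants such as
# "#Stop-trans-agenda", "#stop_trans_agenda", and "#StopTransAgenda" to one
# canonical key before matching against a watchlist of known hateful tags.

WATCHLIST = {"stoptransagenda", "stoplgbtpropaganda", "stolzmonat"}

def canonicalize(tag: str) -> str:
    """Lowercase a hashtag and strip '#', '-', and '_' separators."""
    return re.sub(r"[#_\-]", "", tag.lower())

def flag_hashtags(text: str) -> list[str]:
    """Return the hashtags in `text` whose canonical form is on the watchlist."""
    tags = re.findall(r"#[\w\-]+", text)
    return [t for t in tags if canonicalize(t) in WATCHLIST]

# Both spellings below are caught despite different separators.
print(flag_hashtags("Join us #Stop-trans-agenda #Stolzmonat"))
# ['#Stop-trans-agenda', '#Stolzmonat']
```

In practice, keyword matching like this is only a first-pass signal; contextual models and human review are still needed to distinguish, for instance, posts reporting or condemning a hateful tag from posts promoting it.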
Right-wing extremists are exploiting generative AI to create and spread anti-LGBTQ+ memes, images, and slogans. These AI-generated materials can be mass-produced and distributed across multiple platforms, fueling discriminatory narratives, increasing hostility, and directly encouraging harm.
An especially troubling image circulating on far-right platforms shows a vehicle leaving black marks over a rainbow flag, styled to look like a video game. This image uses the hashtag #black_lines_matter, which is intentionally similar to the Black Lives Matter (BLM) slogan. While it depicts car enthusiasts creating tire burnout marks, the combination of this slogan with the specific imagery actually originates from several neo-Nazi groups, mainly in Eastern Europe. These groups use the image to encourage and legitimize vehicular attacks during Pride parades in June.
The rise in anti-trans violence has raised concerns within the trans community about being targeted with hate crimes. Online discussions have emerged, urging trans individuals to consider arming themselves for self-defense. This discourse highlights the tension between groups opposing transgender rights and those advocating for gun ownership as a means of protection.
The recent Nashville school shooting, allegedly carried out by a trans individual, has further intensified the discussion. Anti-LGBTQ+ movements accuse the government of withholding information to “protect” the shooter’s gender identity, fueling harmful narratives and deeper divisions.
A disturbing manifestation of this harmful discourse is its adoption by the pro-Jihadist community, which, like neo-Nazis, has a strong presence on social media and shares a similar affinity for AI-generated “art.” One prevalent image shows security camera footage from the Nashville school shooting, with the bullets in the shooter’s rifle colored in the trans flag colors. This meme, inspired by a popular online video game, is often accompanied by captions suggesting that this is the best way to “celebrate” Pride Month, merging anti-LGBTQ+ sentiment with the glorification of violence.
“Drag Story Hour” events, where drag performers read books to children in libraries, schools, and bookstores, have long been targeted by hate groups. Anti-LGBTQ+ social media posts claim these events groom children and encourage followers to report them to the police and leave negative reviews. Some users have even suggested that parents who take their children to these events should be arrested, perpetuating harmful stereotypes and misinformation.
Extremist online chatter has been glorifying Omar Mateen, the gunman responsible for the 2016 Pulse nightclub shooting, one of the deadliest mass shootings in American history. Pro-Jihadists and other extremist groups have adopted Mateen as a figurehead, calling for more violence against the LGBTQ+ community.
ActiveFence has identified numerous examples across multiple platforms where account names, hashtags, slogans, and memes utilize Omar Mateen’s name and image. This glorification of past atrocities not only highlights the ongoing threat of violence but also serves to incite further attacks.
World events often provide opportunities for threat actors to escalate harmful activities, particularly hate speech. For instance, during Pride Month and after major events like the US Supreme Court’s overturning of Roe v. Wade, we have witnessed increased efforts to spread hate speech.
All platforms face the challenge of combating hate speech, which impacts the safety and well-being of users and the business itself. Whether it’s anti-LGBTQ+ narratives or toxic content targeting any other community, it is crucial to swiftly remove such content.
Our AI-powered tools, ActiveOS and ActiveScore, are available in over 100 languages. Using these tools, teams can detect and take action against hate speech, regardless of the region, target audience, or content type. Additionally, ActiveFence’s deep threat intelligence offers proactive investigations into novel abuse tactics and narratives – helping Trust & Safety teams prepare for this type of abuse before it reaches their users. By using these tools and services, Trust & Safety teams can better protect their users and reduce the risk of real-world hate crimes on a large scale.
Trust & Safety teams play a critical role in stopping online narratives that can incite real-world harm to the LGBTQ+ community and other marginalized groups. However, given the multitude of platforms and the sheer volume of abusive content generated by threat actors, content moderation alone falls short. During these critical times, Trust & Safety teams need robust intelligence to detect and mitigate harmful activities.
For specific on-platform findings related to these narratives or other harmful activities, our experts are available to provide tailored insights. By leveraging their expertise, you can stay ahead of bad actors and effectively prevent online harm from impacting your platform.