The internet is being flooded with inauthentic traffic that spreads online harm. Much of this flood is enabled by bot accounts, which are used to efficiently spread misinformation, child sexual abuse material (CSAM), and terrorist propaganda. The same bots are also used to coordinate activities that defraud tech companies engaged in advertising. At the forefront of this fight, ActiveFence works to identify coordinated, inauthentic, and harmful activity on platforms, while our CTI team monitors underground marketplaces to locate the threat actors behind these dangerous behaviors.
Before understanding how to counter bots, we must first understand what bots are.
Bots in and of themselves are not necessarily malicious: they are automated pieces of software designed to perform a pre-programmed activity in a manner that imitates humans. Many companies use bots for customer communications, to detect and check for copyright infringements, to locate the lowest prices, or to analyze a website’s content to improve SEO ranking. However, when we in Trust & Safety talk about bots, we mostly mean malicious bots, which generally fall into two categories: bots that mass-produce and share harmful content, and bots that simulate user interactions such as clicks and downloads.
In the next sections, we will show how these bots are used to cause harm online.
Disinformation isn’t new: it existed long before bots and, in fact, long before the internet was invented. While information operations themselves aren’t new, bots now allow disinformation and misinformation to spread quickly, sowing distrust and harming democratic processes.
By tapping into pre-existing interest groups with similar beliefs and interests, disinformation agents can use bots to spread false narratives like a virus: the false information infects one user, who reshares the false content, and it spreads throughout the whole network.
The use of bots for spreading misinformation and disinformation is well documented. Emilio Ferrara, a research professor in data science at the University of Southern California, found that threat actors had deployed 400,000 bots in the political conversation around the 2016 US presidential election. This subset of bots was responsible for around one-fifth of all related social media postings. Outside domestic politics, bot-driven disinformation has also become a weapon of war. In the context of the Russia-Ukraine war, Ukrainian cyber police have found and taken action against many domestic pro-Kremlin bot farms. One operation, dubbed Botofarm, saw 100,000 SIM cards seized and 28 online mobile telephone registration platforms blocked. These bots shared pro-Russian disinformation and propaganda about the ongoing war to weaken Ukrainian morale.
To combat this activity, ActiveFence’s information operations intelligence teams collect signifiers of inauthentic activity. These signals reveal specific accounts on our partners’ platforms that require review. Mapping the metadata of these accounts reveals repeated identifiable information, which exposes networks of similar bot accounts as well as the accounts of the real individuals behind them, enabling Trust & Safety teams to remove an entire disinformation network in a single operation.
Figure: Inauthentic account detection via metadata analysis
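As a rough illustration of this kind of metadata mapping, the sketch below links accounts that repeat the same profile signals and groups them into candidate networks for review. The field names, sample values, and the two-signal threshold are all assumptions made for the example, not ActiveFence’s actual criteria.

```python
from collections import defaultdict
from itertools import combinations

# Illustrative account records; in practice these would come from a
# platform's account metadata. All field names here are assumptions.
accounts = [
    {"id": "u101", "creation_date": "2024-03-02", "avatar_hash": "a9f3", "name_pattern": "word+4digits"},
    {"id": "u102", "creation_date": "2024-03-02", "avatar_hash": "a9f3", "name_pattern": "word+4digits"},
    {"id": "u103", "creation_date": "2023-11-17", "avatar_hash": "77c1", "name_pattern": "firstname.lastname"},
]

def shared_signals(a, b):
    """Count metadata fields two accounts have in common (excluding the id)."""
    return sum(1 for k in a if k != "id" and a[k] == b[k])

def cluster_by_metadata(accounts, min_shared=2):
    """Link accounts that repeat the same metadata values and return the
    connected groups as candidate inauthentic networks for human review."""
    links = defaultdict(set)
    for a, b in combinations(accounts, 2):
        if shared_signals(a, b) >= min_shared:
            links[a["id"]].add(b["id"])
            links[b["id"]].add(a["id"])

    # Walk the link graph to collect connected components.
    clusters, seen = [], set()
    for acc in accounts:
        if acc["id"] in seen or acc["id"] not in links:
            continue
        stack, component = [acc["id"]], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(links[node] - component)
        seen |= component
        clusters.append(component)
    return clusters

print(cluster_by_metadata(accounts))  # [{'u101', 'u102'}]
```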
In addition to spreading misinformation, bots are used by CSAM vendors to promote and sell their illegal content on major social media platforms. To achieve broad engagement, these threat actors simultaneously generate large batches of bot accounts that share explicit CSAM images and videos tagged with specific, relevant hashtags. Similarly, terror organizations such as ISIS and al-Qaeda use bots to strengthen their networks’ resilience: the bots publicize new terrorist domains to supporters and share new content produced by the central terror organization.
In both CSAM and terror content distribution, bots let operators use scale to their advantage while masking their own identity. If one or several bot accounts are identified and blocked, others may still go undetected, allowing the content to continue spreading.
In ActiveFence’s work countering online terrorist activity, we see that bot creation spikes in the days immediately following the release of a new piece of terrorist video content. This pattern is particularly characteristic of ISIS. By focusing on the days when bot activity is most likely to take place, our partners can gauge whether an increase in account activity is organic or driven by terror-fueled bots.
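A simple way to operationalize this observation is to compare each day’s new-account volume against a recent baseline. The sketch below flags creation spikes using a z-score; the 14-day window and the threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def creation_spike(daily_new_accounts, window=14, z_threshold=3.0):
    """Flag days where new-account volume far exceeds the recent baseline.

    `daily_new_accounts` is a list of daily counts, oldest first. The last
    value is compared against the mean and standard deviation of the
    preceding `window` days; a large z-score suggests a coordinated burst
    (e.g., bots created after a new propaganda release) rather than
    organic growth.
    """
    if len(daily_new_accounts) <= window:
        return False
    baseline = daily_new_accounts[-window - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    today = daily_new_accounts[-1]
    return sigma > 0 and (today - mu) / sigma >= z_threshold

# Two weeks of normal sign-ups followed by a burst the day after a release.
counts = [120, 115, 130, 118, 122, 125, 119, 121, 117, 128, 123, 126, 120, 124, 410]
print(creation_spike(counts))  # True
```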
Bots also have many fraudulent applications. In sophisticated phishing campaigns, bots promote high volumes of advertisements for fraudulent offerings, sharing links to domains that offer special deals on items such as Web 3.0 assets. They also manipulate legitimate users into resharing the content, which lends it credibility and convinces susceptible users to trust the promoted websites. When those users visit the sites and attempt to make a purchase, they hand the fraudsters their personal and financial account information.
Another fraud method using bots involves simulating authentic user activity to illegally collect advertising revenue. Using click bots and download bots to interact with content, fraudsters can inflate view counts and impressions, or leave inauthentic comments and likes to draw greater attention to a digital asset. Mobile fraud actors run these bots on servers with emulators of various mobile devices and operating systems; from these emulators, the bots download apps and perform in-app activity such as clicking on ads and other monetized actions.
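One rough signal for this kind of emulator-driven fraud is how concentrated a piece of content’s engagement is across device identities, since emulator farms tend to recycle a small pool of fingerprints. The sketch below is illustrative only; the field name and the top-5 cutoff are assumptions, not a documented detection method.

```python
from collections import Counter

def engagement_concentration(click_events, top_n=5):
    """Fraction of clicks contributed by the `top_n` most active device fingerprints.

    Emulator farms tend to recycle a small pool of device identities, so a
    high concentration on a single piece of content is a signal worth reviewing.
    """
    devices = Counter(event["device_fingerprint"] for event in click_events)
    total = sum(devices.values())
    top = sum(count for _, count in devices.most_common(top_n))
    return top / total if total else 0.0

# Illustrative events: 90 clicks from 3 recycled emulator identities, 10 organic.
clicks = [{"device_fingerprint": f"emu-{i % 3}"} for i in range(90)]
clicks += [{"device_fingerprint": f"user-{i}"} for i in range(10)]

ratio = engagement_concentration(clicks)
print(f"{ratio:.2f}")  # 0.92 -> the top 5 devices drive almost all engagement
```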
While the harm generated by bots is clear, the solution to this problem is far from obvious. Recent attempts to take action against bot networks, whether reactively or proactively, have met significant challenges.
The reactive approach adopted by many platforms is known as IP blacklisting: denying access to server-based bots that use a flagged IP address. However, while this hinders threat actors, it doesn’t stop them entirely. Threat actors often circumvent identification by switching their servers’ IP addresses and returning to attack the platform and its users anew.
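In its simplest form, IP blacklisting is just a membership check against a list of flagged addresses, which is exactly why rotating to a fresh IP range defeats it. A minimal sketch, with an assumed blocklist:

```python
import ipaddress

# Assumed blocklist of flagged addresses and ranges; real deployments pull
# these from threat-intelligence feeds and keep them continuously updated.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),    # flagged hosting range (example)
    ipaddress.ip_network("198.51.100.42/32"),  # single flagged server (example)
]

def is_blocked(ip: str) -> bool:
    """Reject requests originating from addresses on the blocklist."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("203.0.113.7"))  # True: inside a flagged range
print(is_blocked("203.0.114.7"))  # False: the operator rotated to a new range
```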
Detecting bot activity has also become more difficult as bot operators grow more sophisticated: AI programs can now quickly and convincingly emulate human-generated text, so a bot network operator can turn a single piece of text into many distinct posts that evade automated detection. In the same way, networked bot activity is staggered so that simultaneous mass actions do not trigger the abused platform’s safeguarding mechanisms.
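One generic counter to lightly reworded copies (not specific to any vendor) is near-duplicate matching: even after small edits, variants of the same source text share most of their word shingles. A minimal sketch using Jaccard similarity, with illustrative text:

```python
def shingles(text, n=3):
    """Return the set of n-word shingles in a post (lowercased)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets, from 0.0 (disjoint) to 1.0 (identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "breaking news the election results were secretly altered last night"
reworded = "breaking news the election results were quietly altered last night"

# Lightly modified copies keep most shingles, so similarity stays high even
# though an exact-match filter would miss them.
print(round(jaccard(shingles(original), shingles(reworded)), 2))  # ~0.45
```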
Proactive attempts also face significant challenges. In one example from December 2022, Twitter identified that bot networks often exploit the services of East Asian mobile telephone carriers. To tackle the problem, the company denied those carriers access to the platform, which effectively created another problem: while the move did stop the bot networks, it also wound up denying access to authentic users who had enabled 2-factor authentication (2FA).
Subtler approaches to combating bots are therefore needed.
Bots’ behaviors depend on their specific function: they are used to conduct scams on dating apps, manipulate traffic on social media platforms, and manipulate rankings on online marketplaces, and each activity has a different signature. As bot operators have improved their concealment techniques, Trust & Safety teams must devote more intelligence resources to identifying these inauthentic bot accounts.
As an example, key identifiers of bot accounts engaged in information operations and in the promotion of child sexual abuse material and terrorist propaganda include suspicious and repetitive metadata:
Evaluating these five criteria allows accounts to be risk-scored for inauthenticity.
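As an illustration of how such metadata criteria might feed a risk score, the sketch below combines boolean signals with fixed weights. The specific signals, weights, and review cutoff are assumptions made for the example, not the actual scoring model.

```python
# Assumed signals and weights for illustration; real scoring models are
# tuned per platform and per abuse area.
SIGNAL_WEIGHTS = {
    "created_in_burst": 0.30,      # account made during a creation spike
    "repeated_avatar": 0.20,       # profile image hash shared with other accounts
    "templated_username": 0.15,    # name matches a generator pattern
    "no_organic_history": 0.20,    # only reposts / promotional content
    "shared_infrastructure": 0.15, # IP or device fingerprint reused across accounts
}

def inauthenticity_score(signals: dict) -> float:
    """Combine boolean metadata signals into a 0.0-1.0 risk score."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

account = {
    "created_in_burst": True,
    "repeated_avatar": True,
    "templated_username": True,
    "no_organic_history": False,
    "shared_infrastructure": False,
}
print(inauthenticity_score(account))  # 0.65 -> queue for human review
```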
The identification methods shared above are important, but access to the threat actor communities that the bots serve, along with subject matter expertise in the specific threat, is critical. By tapping into these communities, teams can understand the tactics used by each operation, allowing them to easily find its on-platform entities, whether by tracing the content it shares or by identifying one or more involved accounts. Once these are mapped, their metadata can be used to find additional entities related to the operation and to take organized, rather than targeted, action against them.
While many threat actors create bots by running scripts to share content, the more sophisticated bots typically relate to advertising fraud and involve ‘click bots’ and ‘download bots.’ These bots are usually created by specialist vendors and sold on underground markets.
By accessing these marketplaces, Trust & Safety teams can gain access to the accounts being sold and collect intelligence that allows them to take direct action to stop bot activity. Mapping the digital signals of acquired bot accounts can help teams identify similar accounts and actions. Additional insights can be gleaned from this collection, including:
ActiveFence works to provide holistic coverage for Trust & Safety teams to ensure online platform integrity. Our systems and intelligence experts carry out deep threat intelligence and network analysis to locate entities engaged in a wide range of threats, including child abuse, disinformation, terrorism, and cyber threats. With access to sources of online harm across the clear, deep, and dark web and linguistic capabilities in over 100 languages, we offer agile threat intelligence coverage that locates inauthentic activity on your platform, enabling our partners to effectively moderate harmful content and fake and bot accounts.