The podcast industry is booming. With the volume and speed of podcast production increasing, Trust and Safety teams are struggling to moderate podcasts for harmful content. Here, we identify the challenges and suggest how to respond proactively and efficiently.
Over half of Americans listen to podcasts regularly, and their popularity is only growing. With nearly two podcasts produced every minute, the volume is creating a new wave of content moderation challenges for Trust and Safety teams. Between podcasts' sheer volume, their open nature, and the inherent complexity of audio moderation, the Trust and Safety industry finds itself in new waters. With teams unable to keep up, harmful content is passing through podcast platforms.
The threat posed by user-generated content within podcasts is growing and increasingly extreme. Mainstream platforms host dangerous content such as terror, extremism, xenophobia, and rampant disinformation. Some of this harmful content is stated outright, while other content is deliberately obscured and goes undetected. Bad actors use tactics that are even trickier to detect: ActiveFence has identified core tactics including concealing content behind opaque episode descriptions and hiding messages in song.
Not only are users exposed to these dangers, but these threats harm the reputation of hosting companies as well. The podcast problem is drawing growing attention in the media, raising concerns about listener backlash and lost revenue. According to an IAB study, podcast ad revenues were up 15% in 2020, with 160 new advertisers buying ads for the first time each week. Because brands do not want to advertise on controversial shows, bad press can significantly impact this surge in podcast ad revenue.
As mentioned above, the enormous volume of podcasts is only growing. As of January 2021, over two million podcasts were registered by Google, with 28 million episodes in over 100 languages. More than 17,000 podcasts are produced weekly, making the pace of incoming content difficult to keep up with. Their reach is also high: podcast audiences are projected to grow by 10% this year, to 117.8 million listeners.
With these challenges, it is not surprising that dangerous content is slipping through the cracks. A Brookings analysis of 8,000 popular political podcast episodes found that a tenth contained potentially false information.
Currently, most Trust and Safety teams are not equipped to take a proactive approach to monitoring podcasts and rely on reactive measures alone. These measures depend on post-publication methods such as user flagging. But because harmful podcasts largely succeed in reaching their intended audiences, those listeners are unlikely to report an episode for containing harmful content. As a result, podcasts are often removed only once they have attracted media attention, leaving hosting platforms responding reactively rather than proactively.
When it comes to the ethics of content moderation, Trust and Safety teams generally struggle to balance liberties with safety and laws with policy creation. The very nature of podcasts, however, makes these questions far more difficult.
Podcasts are distributed through RSS feeds: links that point to a list of episodes. Podcast applications monitor these feeds for new content, which makes podcast hosting platforms the functional equivalent of a search engine. This standardized, open format raises philosophical questions about censorship and freedom of speech and expression. Removing a podcast from a feed raises the same questions as making a specific webpage inaccessible on a search engine - a difficult question, and one that society itself is still grappling with.
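To make the distribution mechanism concrete, here is a minimal sketch of reading a podcast RSS feed with Python's feedparser library. The feed URL is a placeholder, and the fields shown are those a typical podcast feed exposes.

```python
import feedparser  # third-party RSS/Atom parsing library

# Placeholder URL - any public podcast RSS feed works the same way.
FEED_URL = "https://example.com/podcast/feed.xml"

feed = feedparser.parse(FEED_URL)

# The feed carries publisher-level metadata...
print(feed.feed.get("title"), "-", feed.feed.get("author", "unknown publisher"))

# ...and each entry is an episode, visible to any app that polls the feed.
for episode in feed.entries[:5]:
    audio_links = [
        link.get("href")
        for link in episode.get("links", [])
        if link.get("rel") == "enclosure"  # the enclosure link is the audio file
    ]
    print(episode.get("title"), episode.get("published"), audio_links)
```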
While the ethics of moderating podcasts are grey, the law is not, and it presents a challenging regulatory environment for podcast platforms. Under the UK's Online Safety Bill, companies are required not just to take action against illegal content but to seek out "harmful content" as well. Although the bill has been criticized for censoring legal speech, the legislation requires platforms to be proactive.
At their core, podcasts are open by nature. Because of RSS feeds, once a podcast has been produced, it appears immediately in podcast applications when searched for. Transmitters, licenses, and access to studios are no longer necessary to produce a podcast, creating an environment where anyone can be a publisher or broadcaster. On the flip side, while anyone can publish content, audiences cannot publish responses the way they can on traditional social media. When it comes to reporting harmful content, some podcast platforms offer easily accessible reporting mechanisms, but many processes are indirect and difficult to access. Unlike Twitter, where audiences can easily comment on and flag content, the nature of podcasts removes listeners' ability to fact-check.
Technology has not yet risen to the demands of audio moderation. While spoken words can be transcribed with natural language processing tools, this solution is far from perfect. Transcribing hours upon hours of content is exorbitantly expensive and, in practice, unrealistic. Even if it were feasible to transcribe all of the content to text, AI tools are not advanced enough to reliably detect harmful content. False information would be missed within the large volume of transcripts, and nuanced context would be ignored. Nuance is an ongoing challenge even when analyzing written text, and verbal human interaction is subtler still: a threatening or sarcastic tone, for instance, cannot be transcribed. This leaves audio open to many false positives.
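For illustration only, here is a sketch of the transcribe-then-classify approach described above, using the open-source Whisper speech-to-text model and a generic text classifier from Hugging Face. The file path, model choices, and score threshold are assumptions, and the comments mark where cost and lost tone undermine the approach.

```python
import whisper                      # openai-whisper: open-source speech-to-text
from transformers import pipeline   # generic text-classification pipeline

# Transcription is accurate but slow and costly at podcast scale:
# a single hour-long episode can take minutes of GPU time.
asr = whisper.load_model("base")                      # model size is an assumption
transcript = asr.transcribe("episode.mp3")["text"]    # hypothetical local file

# Classification sees only the words - sarcasm, tone of voice, and
# conversational context are invisible to the model.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

# Long transcripts must be chunked to fit the model's input window,
# which fragments whatever context remains.
chunks = [transcript[i:i + 1000] for i in range(0, len(transcript), 1000)]

flagged = []
for chunk, result in zip(chunks, classifier(chunks)):
    # Thresholding a single score is a naive stand-in for a real policy
    # decision; label names and calibration vary by model.
    if result["score"] > 0.8:
        flagged.append(chunk)

print(f"{len(flagged)} of {len(chunks)} chunks flagged for human review")
```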
It is clear that the challenge of moderating podcasts is not a simple one. From the nature of podcasts to platforms' limited resources, existing solutions cannot match the scale of the problem. To meet the challenge proactively while working within existing resources, an alternative solution is needed.
Focus can and should shift to the sources of podcasts: identifying where a podcast originates before examining the content itself. By understanding who publishes a podcast, episodes produced by questionable actors can be flagged, monitored, and analyzed immediately, before they cause damage. This allows platforms to prioritize high-risk areas and allocate resources more effectively and efficiently.
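As a minimal sketch of what this source-first triage could look like, assuming a hypothetical watchlist of risky publishers (the names below are invented) and the same feed-parsing library used earlier:

```python
import feedparser

# Hypothetical intelligence: publishers already associated with abusive content.
HIGH_RISK_PUBLISHERS = {"Hypothetical Extremist Network", "Invented Disinfo Outlet"}

def triage_feed(feed_url: str) -> list[dict]:
    """Queue new episodes for review based on who publishes them,
    before spending resources analyzing the audio itself."""
    feed = feedparser.parse(feed_url)
    publisher = feed.feed.get("author") or feed.feed.get("title", "unknown")

    review_queue = []
    for episode in feed.entries:
        review_queue.append({
            "publisher": publisher,
            "episode": episode.get("title"),
            # High-risk sources jump the queue; the rest can be sampled later.
            "priority": "high" if publisher in HIGH_RISK_PUBLISHERS else "routine",
        })
    return review_queue
```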
With this proactive approach, hosting platforms save resources, mitigate threats to protect the public, and minimize the risk of negative media exposure.
Download our report to understand the abuse present on audio streaming platforms and how to counter it.