The Podcast Moderation Challenge

November 18, 2021

The podcast industry is booming. As the volume and speed of podcast production increase, Trust and Safety teams struggle to moderate podcasts for harmful content. Here, we identify the key challenges and suggest how to respond proactively and efficiently.


Over half of Americans listen to podcasts regularly, and their popularity is only growing. With nearly two podcasts produced every minute, the demand is creating a new wave of content moderation challenges for Trust and Safety teams. Between podcasts' sheer volume, their open nature, and the inherent complexity of audio moderation, the Trust and Safety industry finds itself in new waters. Unable to keep up, platforms are letting harmful content pass through their podcast systems.

Why you should care

The threats posed by user-generated content within podcasts are growing and, in some cases, extreme. Mainstream platforms host dangerous content such as terror, extremism, xenophobia, and rampant disinformation. Some of this harmful content is stated outright, while other content goes undetected because bad actors use tactics that are trickier to spot. ActiveFence identified some of these core tactics and found that abusers conceal content behind opaque descriptions and hide messages in song.

Not only are users exposed to these dangers, but these threats also harm the reputation of hosting companies. The podcast problem is drawing more attention in the media, raising the risk of a listener backlash and a resulting decrease in revenue. According to an IAB study, 2020's podcast ad revenues were up by 15%, with 160 new advertisers buying ads for the first time each week. Because brands do not want to advertise on controversial shows, bad press can significantly undercut that surge in ad revenue.

The scale and struggle to respond proactively

As mentioned above, the already enormous volume of podcasts is only growing. As of January 2021, over two million podcasts were registered with Google, comprising 28 million episodes in over 100 languages. Over 17,000 podcasts are produced weekly, making the speed of incoming content difficult to keep up with. On top of this, their reach is high: podcast audiences are projected to grow by 10% this year, to 117.8 million listeners.

With these challenges, it is not surprising that dangerous content is slipping through the cracks. A Brookings analysis of 8,000 popular episodes of political podcasts found that roughly a tenth contained potentially false information.

Currently, most Trust and Safety teams are not equipped to take a proactive approach to monitoring podcasts and rely on reactive measures alone. These measures depend on post-publication signals, such as user flagging. But because harmful podcasts generally do reach their intended audiences, those listeners are unlikely to report them. As a result, podcasts are often only removed once they have drawn media attention, leaving hosting platforms reacting to problems rather than getting ahead of them.


Law and the philosophy of podcast moderation

When it comes to the ethics of content moderation, Trust and Safety teams generally struggle to balance liberties with safety and laws with policy creation. The very nature of podcasts, however, makes these questions far more difficult.

Podcasts are distributed through RSS feeds: links to a continually updated list of episodes. Podcast apps simply poll these feeds for new content, which makes a hosting platform essentially the equivalent of a search engine. This standardized, open format raises philosophical questions about censorship and freedom of speech and expression: removing a podcast from a directory raises the same questions as making a specific webpage inaccessible on a search engine. These are difficult questions, and ones that society itself is still grappling with.
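
To make the mechanism concrete, here is a minimal sketch that reads a podcast feed with the open-source feedparser library. The feed URL and field values are hypothetical placeholders; the point is simply that a show's publisher metadata and episode list live in one openly readable document that apps re-read on a schedule.

```python
# Minimal sketch: inspecting a podcast RSS feed with the open-source
# feedparser library. The feed URL below is a placeholder, not a real show.
import feedparser

FEED_URL = "https://example.com/podcast/feed.xml"  # hypothetical feed

feed = feedparser.parse(FEED_URL)

# Channel-level metadata identifies the show and its publisher.
print("Show:", feed.feed.get("title"))
print("Author:", feed.feed.get("author"))

# Each entry is one episode; podcast apps simply re-read this list,
# so removing a show means refusing to surface the feed itself.
for episode in feed.entries[:5]:
    print(episode.get("title"), episode.get("published"))
```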

While the ethics of moderating podcasts are grey, the law is not, and it presents a challenging regulatory environment for podcast platforms. Under the UK's Online Safety Bill, companies are required not just to take action against illegal content, but to seek out "harmful content" as well. Although the bill faces criticism for censoring legal speech, it requires platforms to be proactive.

Open by nature

At their core, podcasts are open by nature. Because of RSS feeds, once a podcast has been produced it appears immediately in podcast applications when searched. Transmitters, licenses, and access to studios are no longer necessary to produce a podcast, creating an environment where anyone can be a publisher or broadcaster. On the flip side, while anyone can publish content, audiences cannot publish responses in the way that they can on traditional social media. Some podcast platforms have easily accessible mechanisms for reporting harmful content, but many processes are indirect and difficult to find. Unlike Twitter, where audiences can comment on and flag content easily, the nature of podcasts removes listeners' ability to fact-check.

The limits of technology

Technology has not yet risen to the demands of audio moderation. While spoken words can be transcribed with natural language processing tools, this solution is far from perfect. Transcribing hours and hours of content is exorbitantly expensive and, in practice, unrealistic. Even if transcribing everything were feasible, AI tools are not yet advanced enough to reliably detect harmful content: false information would be missed within the huge volume of transcripts, and nuanced context would be ignored. Nuance is already an ongoing challenge when analyzing text, and verbal human interaction is even more subtle; a threatening or sarcastic tone, for instance, cannot be transcribed. This leaves audio moderation open to many false positives.
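
For illustration, here is a minimal sketch of the transcribe-then-classify approach described above, assuming the open-source Whisper speech-to-text model and a generic Hugging Face text classifier; the audio file name and review threshold are placeholders, not a recommended production stack. Even in this toy form, the weaknesses are visible: transcription is slow and costly at scale, long transcripts must be chopped into chunks that lose context, and the classifier never hears the speaker's tone.

```python
# Sketch of transcribe-then-classify moderation, assuming the open-source
# Whisper model and a Hugging Face toxicity classifier as stand-ins.
import whisper
from transformers import pipeline

# Speech-to-text: slow and costly when applied to hours of audio per show.
stt_model = whisper.load_model("base")
transcript = stt_model.transcribe("episode.mp3")["text"]  # hypothetical file

# Text classification: it operates on words alone, so a sarcastic or
# threatening tone of voice is invisible to it.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

# Long transcripts must be chunked to fit the model's input window,
# which fragments context even further.
chunks = [transcript[i:i + 512] for i in range(0, len(transcript), 512)]
for chunk in chunks:
    result = classifier(chunk)[0]
    if result["score"] > 0.8:  # arbitrary illustrative threshold
        print(f"Flag for review ({result['label']}, {result['score']:.2f}):",
              chunk[:80])
```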

Identifying podcast publishers at their source

It is clear that the challenge of moderating podcasts is not a simple one. Between the nature of podcasts and platforms' limited resources, existing solutions cannot match the scale of the problem. To meet the challenge proactively while working within existing resources, an alternative approach is needed.

Focus can and should shift to the sources of podcasts: identifying where a podcast originates before examining the content itself. By understanding who publishes a podcast, episodes produced by questionable actors can be flagged, monitored, and analyzed immediately, before they cause damage. This allows platforms to prioritize high-risk areas and allocate resources more effectively and efficiently.
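
As a rough illustration of what source-level prioritization could look like, the sketch below checks a new feed's publisher identifiers against a watchlist before any episode content is reviewed. The watchlist, feed URL, and field choices are hypothetical assumptions for the example, not a description of any platform's actual system.

```python
# Minimal sketch of source-level prioritization, under the assumption that a
# platform maintains its own watchlist of known bad publishers. The watchlist
# entries and feed URL below are hypothetical placeholders.
from urllib.parse import urlparse

import feedparser

# Hypothetical publisher domains/emails previously tied to abuse.
PUBLISHER_WATCHLIST = {"badactor-network.example", "editor@extreme.example"}

def risk_signals(feed_url: str) -> list[str]:
    """Return the publisher identifiers in a feed that match the watchlist."""
    feed = feedparser.parse(feed_url)
    signals = []

    # Who published this feed? Check the hosting domain and declared author.
    domain = urlparse(feed_url).hostname or ""
    author = (feed.feed.get("author") or "").lower()
    for identifier in (domain, author):
        if identifier in PUBLISHER_WATCHLIST:
            signals.append(identifier)
    return signals

# New episodes from flagged publishers go to the front of the review queue
# before they are surfaced to listeners.
if risk_signals("https://badactor-network.example/feed.xml"):
    print("High-risk publisher: hold episodes for priority review")
```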

With this proactive approach, hosting platforms save resources, mitigate threats to protect the public, and minimize the risk of negative media exposure.

Download our report to understand the abuse present on audio streaming platforms and how to counter it.
