The Content Moderation Conundrum With Influential Users

November 2, 2022

In principle, all content moderation decisions should be fair and consistent, but some decisions are much trickier than others, especially when the users and posts in question have massive followings and influence. While one post might be a blip on the radar, moderating another might make global news. To ensure that all users, no matter their relative power, receive fair treatment, Trust & Safety teams need to weigh a range of considerations to keep their platforms trustworthy.

Powerful, influential users are generally good for business. Using their influence to share mostly positive (or at least not violative) content, these individuals create communities and accrue user engagement, ultimately driving high revenues for both themselves and the platforms they use. However, it is when these accounts begin participating in violative activity that Trust & Safety teams are faced with more significant dilemmas than usual.

Recently, Ye (formerly known as Kanye West) had several posts removed and his Instagram and Twitter accounts temporarily restricted for violating the content policies on both platforms. While this is not the first time Ye has lashed out on these platforms – not even the first time this year, in fact – the response was somewhat softer than what former president Donald Trump experienced when he was banned from Twitter following the January 6 insurrection.

How Platforms Moderate Influential Users

Typically, the enforcement of content policy takes a number of forms, ranging from adding warning labels that flag sensitive content, to removing posts, suspending users, or banning them outright. Some platforms also add a layer of context or an informative label to seemingly controversial posts that might not otherwise violate a policy. In the cases of Ye and Trump, platforms took the typical approach: post removal, account suspension, and, in Trump’s case, an indefinite ban.

It’s the goal and responsibility of Trust & Safety teams to keep platforms safe and secure, and sometimes that means holding users accountable for their actions. On its face, this is a fairly mundane issue – bad behavior has consequences – but it becomes far more complex when those committing the violations have massive followings, as the cases of Ye and Trump make clear.

It’s no secret that power helps people get away with things offline, and online, that sometimes appears to hold true as well. Back in 2019, one outlet published a story on the conundrum of moderating high-profile users like those mentioned above and found that Meta moderated its users differently depending on their level of influence. Rules were relaxed for powerful users, and certain personalities were effectively immune to them: the company was privately holding influential users to different standards. In practice, this isn’t so uncommon. Twitter, in its explanation of Trump’s ban from the platform, stated that it was the context of his tweets and the influence he carried that led to his removal, not necessarily the use of explicitly violative language. This suggests that had a user with a small following sent out the same tweets, that person might still be on the platform today: it wasn’t just the content that was problematic in Trump’s case, it was his level of influence.

What’s At Stake With Moderation?

Powerful accounts represent high user engagement and business opportunities for platforms and posters alike, and high engagement translates into financial gain, so keeping people online is one of a platform’s chief interests. Moderating accounts or posts with large followings puts this at risk in three ways: it can provoke on-platform backlash, with users trolling or spamming Trust & Safety teams or leaving en masse; it can generate bad press off-platform; and it can ultimately cost revenue. That said, choosing not to moderate high-profile users can have the same effect: visibly treating users unequally does nothing to boost a platform’s credibility. Damned if you do, damned if you don’t, as the saying goes.

The question here isn’t about what policies a platform should have, but whether those policies should be applied evenly to all users, regardless of status. The short answer is yes, but it’s an answer that comes with complications.

The Solution: Transparency and Balance

Trust & Safety teams are tasked with the difficult job of balancing free and open engagement in online communities with guaranteeing user safety. From the outset, that requires adequate transparency.

The rationale behind content moderation decisions should be clear to users so that when issues arise, there’s well-established clarity about how they’re handled. That means laying out the process for each level of moderation, whether that’s a first-strike system that bans someone outright after a single violation, or a go-to method of adding warning labels to potentially violative posts while leaving them up. When it comes to especially tough decisions in high-stakes situations, teams should have internal processes to ensure the right calls are being made, bringing different opinions to the table to keep bias out of the final decision. Being upfront about moderation mistakes if and when they happen is also a good practice for Trust & Safety teams to adopt. Those moments present learning opportunities not only for the platform in question but for other companies in the industry as well.
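To make the idea of publishing each level of moderation concrete, here is a minimal sketch of what a tiered enforcement policy could look like if written down in code. It is purely illustrative: the action names, severity scale, and strike thresholds are assumptions chosen to mirror the levels described above, not any platform’s actual rules. The point is that the decision depends only on the violation and the user’s history, never on their follower count.

```python
# Illustrative sketch of a tiered enforcement policy. The names,
# severity scale, and thresholds below are assumptions for the sake
# of the example, not any real platform's implementation.

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ADD_WARNING_LABEL = auto()   # leave the post up, but flag it
    REMOVE_POST = auto()
    SUSPEND_ACCOUNT = auto()     # temporary restriction
    BAN_ACCOUNT = auto()         # indefinite removal from the platform


@dataclass
class Violation:
    severity: int        # assumed scale: 1 = borderline, 3 = severe
    prior_strikes: int   # the user's previous confirmed violations


def decide_action(v: Violation, first_strike_ban: bool = False) -> Action:
    """Map a confirmed violation to an enforcement action.

    Note the deliberate absence of any "follower count" or "influence"
    input: the same facts produce the same action for every user.
    """
    if first_strike_ban or v.severity >= 3:
        return Action.BAN_ACCOUNT
    if v.prior_strikes >= 2:
        return Action.SUSPEND_ACCOUNT
    if v.severity == 2:
        return Action.REMOVE_POST
    return Action.ADD_WARNING_LABEL
```

Writing the policy down this explicitly, even informally, makes it easier to explain a decision after the fact and harder to quietly carve out exceptions for powerful accounts.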

Trust & Safety doesn’t exist in a vacuum, and powerful users are on every site and app out there: policies can and should evolve with the changing world we live in. Across the board, Trust & Safety teams need to agree that policy is policy, regardless of a user’s power on a platform or off it. Situations will always arise that challenge this notion, as we’ve seen recently. But holding all users to the same standard is the key to maintaining trustworthiness and transparency. Just as laws in the ‘real world’ are meant to be applied equally, so too should the policies on platforms.


Looking for better ways to moderate influential users? Explore our comprehensive content moderation solutions.
