Transparency reports share the details of online platforms’ content moderation and data-sharing activities. Offering many benefits to online platforms, transparency reporting is becoming more common for both small and large online services.
These reports offer platforms the opportunity to gain the trust of their users by being fully open and disclosing their activities relating to user privacy, freedom of expression, and safety – the things users care about most. However, with no standardization, there is little clarity around what a transparency report should look like. Furthermore, whether it’s worthwhile to allocate resources to this costly and time-consuming venture remains a question for many.
Here, we explain what a transparency report is, the laws that require publishing the reports, and how companies can begin the reporting process.
For a comprehensive review of transparency reports’ legal requirements, industry best practices, and a free customizable reporting template, access the Guide to Transparency Reports.
The history of transparency reports dates back to 2010, when Google began to disclose information regarding digital governance and enforcement measures. Google essentially wanted “to provide hard evidence of how laws and policies affect access to information online.” In the following years, Twitter (2012), Microsoft (2013), Apple (2013), Yahoo (2013), and Facebook (2013) all published their first transparency reports. At the time, the reports only shared data surrounding government requests for user information and content taken down. However, since then, they’ve expanded to include a company’s internal content moderation metrics. Etsy published the first policy enforcement report in 2015. In 2018, the Santa Clara Principles promoted the adoption of industry content moderation reporting and set guidelines for transparency and accountability.
Today, transparency reports are far more common. According to Access Now, a digital rights advocacy organization, 88 companies released transparency reports in 2021. With new regulations coming into play and an increasingly critical public, this number is expected to grow.
As mentioned, transparency reports vary in the information they share, though all connect back to transparency about company activities. Our Trust & Safety Glossary defines transparency reports as:
“Voluntary reports that provide transparency into the ways that data is handled and moderation decisions are made. This report communicates key metrics that can include: the volume, type and region of detected content, how that content was detected, what actions were taken, the volume of and reactions to appeals, and changes over time – among other metrics.”
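The metric categories named in that definition can be sketched as a simple data structure. The field and function names below are illustrative assumptions, not a standard reporting schema:

```python
from dataclasses import dataclass

# Hypothetical record of a single moderation action, covering the metric
# categories named in the definition above: volume, type, region,
# detection method, action taken, and appeals.
@dataclass
class ModerationAction:
    abuse_type: str         # e.g. "spam", "hate_speech"
    region: str             # where the content originated
    detection: str          # "automated" or "user_report"
    action: str             # "removed", "restricted", etc.
    appealed: bool = False

def summarize(actions):
    """Aggregate per-abuse-type counts for a reporting period."""
    summary = {}
    for a in actions:
        row = summary.setdefault(
            a.abuse_type, {"total": 0, "automated": 0, "appeals": 0}
        )
        row["total"] += 1
        if a.detection == "automated":
            row["automated"] += 1
        if a.appealed:
            row["appeals"] += 1
    return summary

actions = [
    ModerationAction("spam", "EU", "automated", "removed"),
    ModerationAction("spam", "US", "user_report", "removed", appealed=True),
    ModerationAction("hate_speech", "EU", "automated", "restricted"),
]
print(summarize(actions))
```

Aggregates like these, tracked over successive reporting periods, are what let a report show "changes over time" rather than a single snapshot.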
Initially, as mentioned above, transparency reports covered only law enforcement requests for customer data and demands to remove content. This answered the public concern over data privacy and government interference at the time, as it still does today. As the trend grew, companies began to share their content and platform enforcement measures alongside government-related data. Often, the content of a transparency report sheds a positive light on a company: it shows that platforms care about their users by detailing the actions taken to protect privacy, enforce policies that protect users, and respond to user appeals.
Transparency reports aren’t only for the purpose of positive public perception. They are also beginning to be required by regulators worldwide. The recent passing of the Digital Services Act, which requires transparency reports even for smaller online services, signals an increase in regulations that are likely to include similar obligations.
To learn more about the DSA’s requirements, download our guide here.
A few countries already require this. For example, Germany’s NetzDG (Network Enforcement Act), passed in 2017, requires social media companies to publish bi-annual reports on how removal requests are processed. TERREG, the EU’s regulation on terrorist content, which came into effect in June 2022, requires platforms to produce annual transparency reports to demonstrate compliance.
When publishing reports, companies should be aware of the following challenges.
The challenge: Publishing transparency reports requires Trust & Safety teams to implement processes for tracking content moderation efforts, which can be expensive and time-consuming. Additional resources are needed for the design, launch, maintenance, and expansion of the reports.
The solution: Teams should already be compiling relevant data to track Trust & Safety KPIs. Learn more about measuring Trust & Safety here. Furthermore, it is important to remember that users are not necessarily expecting a platform to publish transparency reports, which gives companies room to start small and scale up at their own pace.
The challenge: Currently, there is no single way to classify and report platform activities. What constitutes meaningful transparency isn’t always clear, so different platforms report different metrics based on what they understand transparency to mean. This lack of consistency and standardization makes it harder for companies looking to begin reporting on their activities.
The solution: The lack of standardization should not dissuade companies; it can even be advantageous. Leaving room for innovation and the development of new standards can make transparency more meaningful.
The challenge: In rare cases, companies have received backlash for under-reporting: they are criticized for omitting certain information and accused of sharing only the data that shows them in a positive light.
The solution: Typically, the companies that receive backlash for their reports are high-visibility platforms; such criticism is unlikely for small- to medium-sized platforms. Still, to avoid the risk, platforms should strive for full transparency and provide as much information as possible. Reports should also be written clearly, so users can see that proper systems and processes are in place, increasing trust in the platform. Another practice worth implementing is real-time transparency: when a user’s account or content is suspended or removed, the user receives a clear explanation of the violation and can appeal. This, too, builds trust with platform audiences.
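The real-time transparency practice described above might take the shape of a user-facing enforcement notice. The payload below is a minimal sketch; the field names and URL are assumptions for illustration, not a standard format:

```python
import json

# Hypothetical enforcement notice sent to a user, illustrating
# real-time transparency: state what happened, which policy was
# violated, why, and how to appeal.
notice = {
    "action": "content_removed",
    "policy_violated": "harassment",  # policy section the content breached
    "explanation": "The post targeted another user with abusive language.",
    "appeal": {
        "available": True,
        "deadline_days": 14,
        "url": "https://example.com/appeals",  # placeholder endpoint
    },
}

print(json.dumps(notice, indent=2))
```

The key design choice is that the notice pairs every enforcement action with both a reason and an appeal path, so the user is never left guessing.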
Transparency reporting offers online platforms many benefits, the most obvious of which is what it signals – trustworthiness.
Transparency reports signal that a company is sincere about being open, safe, fair, and honest. They are a tool for communicating company values, showcasing the commitment and effort invested in keeping the platform fair and safe. By building or rebuilding trust, reports strengthen relationships with users, reassuring them that their communications, online presence, and representations of who they are remain safe.
Transparency reports reveal how governments interact with user data and the demands they place on platforms. This holds governments accountable for respecting user privacy, something many users value deeply.
Often overlooked, transparency reports can also contribute to public policy discourse. Reporting on extremist activity, propaganda, disinformation, and more adds context to conversations about how to combat these issues. Policymakers, think tanks, governments, and platforms can all benefit from this data, and being transparent about these issues creates opportunities for the industry to collaborate on solutions.
To gain users’ trust, demonstrate transparency, and meet emerging requirements, companies should plan their next steps toward implementing transparency reports.
As we’ve learned, transparency reports shed light on how companies moderate content and interact with government entities. Meeting regulatory obligations while giving platform communities visibility, which builds openness and trust with users, is a win-win for online platforms.
Access our guide for everything you need to know about transparency reports, including a review of relevant legislation, recommended practices, and a free, customizable reporting template.