December 4, 2025

Bodyguard x RTBF: Building smarter moderation together

Guest blog by Future Media Hubs



In an era where online hate and misinformation are dominating comment sections, public broadcasters are under growing pressure to keep conversations civil without silencing voices.

RTBF, Belgium’s French-speaking public broadcaster, tackled this challenge through an innovative partnership with Bodyguard, a French startup specializing in AI-powered content moderation. Together, they built a smarter, human-centered approach to social media moderation, offering lessons for every media organization navigating the digital world.


The Challenge  

Like most public broadcasters, RTBF operates in an increasingly polarized digital landscape where social platforms have scaled back their own moderation efforts. Every month the organization manages roughly 470,000 comments across Facebook, Instagram, YouTube and TikTok. Around 80,000 of these, about 17% of all user contributions, are automatically hidden as potentially offensive, misleading, or harmful, and even more are flagged for review.

The rejection rate fluctuates by genre: close to one in four comments on news-related posts requires moderation, compared to a much smaller share on entertainment-focused content. Before Bodyguard, RTBF worked with another company that provided human moderation, but the sheer volume and variability of comments made it clear that relying solely on human review was no longer sustainable. RTBF needed a smarter, more scalable system that could uphold its editorial standards while preserving open dialogue with the public.


The Solution: AI with Human Oversight  

In October 2024, RTBF partnered with Bodyguard to create a system where artificial intelligence works hand in hand with human oversight.

The new setup provides a centralized dashboard where moderators can track live comment streams, sentiment levels and rejection rates across all of RTBF’s accounts on Facebook, YouTube, Instagram and TikTok. It allows them to identify emerging risks faster and focus human attention on the discussions where context and editorial judgment matter most, such as debates around sensitive political topics or breaking news events.

Facebook remains RTBF’s busiest platform by far, generating the largest share of user comments and toxicity alerts. This reflects broader industry patterns: established social networks continue to host the most active and polarized public conversations. By contrast, platforms like Instagram and TikTok, while growing in engagement, present fewer but more fragmented moderation challenges.

The partnership between RTBF and Bodyguard quickly evolved into a co-designed system, one that reflects both the complexity of Belgium’s media environment and the realities of digital discourse today. Instead of deploying an out-of-the-box AI filter, the two teams built a tailored framework that balances automation, cultural nuance and human oversight.

  • Smart categorization: Bodyguard classifies comments into categories and assesses post risk levels based on both language intensity and engagement, helping RTBF moderators prioritize where to intervene first (a sketch of this two-signal logic follows this list).
      ◦ Low risk might include mild sarcasm or off-topic remarks.
      ◦ Medium risk typically signals recurring negativity or controversial discussion threads that could escalate.
      ◦ High risk involves direct hate speech, threats, or organized trolling activity.
    When moderators manually correct a misclassified comment, the AI adjusts its understanding for future cases, making the system gradually more accurate over time.
  • Custom word lists: A continuously growing list covers common slurs and national-context terms (e.g. political nicknames, regional stereotypes, or coded replacements such as using “Swedes” to refer to migrants).
  • User management: Moderators can mute or ban toxic users, with muting preferred to avoid escalation.
  • Automated alerts: Every social media account monitored by Bodyguard has a predefined toxicity threshold: a set percentage of negative or harmful comments tolerated before an alert is triggered. For example, if more than 20% of daily comments on a Facebook page are flagged as toxic or high-risk, the system immediately sends an email notification to RTBF’s moderation team (see the second sketch below).
  • Dedicated human teams: Specialized moderators familiar with the Belgian political context handle sensitive topics.
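
To make the categorization above concrete, here is a minimal sketch of how a two-signal risk tiering could look in Python. Everything in it is an illustrative assumption: the placeholder word list, the word-count proxy for language intensity, and the numeric thresholds are ours, not Bodyguard’s, whose production classifier is a trained, proprietary model that also learns from moderator corrections.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Placeholder entries only; RTBF's real lists are far larger and include
# coded replacements alongside common slurs (see "Custom word lists" above).
FLAGGED_TERMS = {"placeholder-slur", "placeholder-nickname", "placeholder-code"}

@dataclass
class Comment:
    text: str
    replies: int    # engagement signals: how much discussion and reaction
    reactions: int  # the comment attracts

def language_intensity(comment: Comment) -> float:
    """Toy proxy for language intensity: the share of flagged terms in the
    text. A real system would use a trained classifier, not a word count."""
    words = comment.text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FLAGGED_TERMS for w in words) / len(words)

def assess_risk(comment: Comment) -> Risk:
    """Combine language intensity with engagement, as described above:
    hostile wording on a high-engagement thread gets priority."""
    intensity = language_intensity(comment)
    engagement = comment.replies + comment.reactions
    if intensity > 0.15 or (intensity > 0.05 and engagement > 100):
        return Risk.HIGH
    if intensity > 0.05 or engagement > 250:
        return Risk.MEDIUM
    return Risk.LOW

# Example: a flagged term on a busy thread is tiered HIGH under these
# invented thresholds, so a moderator would look at it first.
print(assess_risk(Comment("a placeholder-code remark", replies=120, reactions=300)))
```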
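
The alerting rule lends itself to a similar sketch. Only the 20% Facebook threshold comes from the article; the per-account dictionary, the second threshold, the email addresses and the local mail relay are hypothetical stand-ins for whatever Bodyguard’s real notification pipeline does.

```python
import smtplib
from email.message import EmailMessage

# Per-account toxicity thresholds: the share of daily comments that may be
# flagged before moderators are alerted. The 20% figure is from the article;
# the 25% figure is invented for the example.
TOXICITY_THRESHOLDS = {"facebook": 0.20, "youtube": 0.25}

def notify_moderators(account: str, ratio: float) -> None:
    """Hypothetical email alert; the addresses and the localhost relay are
    placeholders, not RTBF's actual setup."""
    msg = EmailMessage()
    msg["Subject"] = f"Toxicity alert: {ratio:.0%} flagged today on {account}"
    msg["From"] = "alerts@example.org"
    msg["To"] = "moderation@example.org"
    msg.set_content(
        f"{ratio:.0%} of today's comments on {account} were flagged as "
        "toxic or high-risk, above the configured threshold."
    )
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

def check_daily_toxicity(account: str, flagged: int, total: int) -> bool:
    """Return True and fire an alert when the day's flagged share exceeds
    the account's predefined threshold."""
    if total == 0:
        return False
    ratio = flagged / total
    if ratio > TOXICITY_THRESHOLDS.get(account, 0.20):
        notify_moderators(account, ratio)
        return True
    return False

# Example: 1,200 flagged out of 5,000 comments is 24%, above the 20%
# Facebook threshold, so the moderation team would be emailed.
check_daily_toxicity("facebook", flagged=1200, total=5000)
```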


Current Limitations and Future Development  

Despite its strong results, RTBF notes that the system still faces a few practical limitations. 

  • AI continues to struggle with sarcasm, irony and cultural context, which means human oversight remains essential for accurate moderation.
  • Due to API restrictions, moderation on TikTok is only partially supported, resulting in lighter and more reactive monitoring compared to other platforms.
  • Moderators cannot yet respond to comments directly from within Bodyguard, requiring them to switch between multiple systems. 

Bodyguard is actively working on these areas, with plans to integrate generative AI for improved contextual understanding and to add direct reply functionality to streamline moderation across platforms. 


Conclusion  

The partnership has expanded beyond RTBF: VRT, the Flemish public broadcaster, is also piloting the system, and the BBC is exploring similar collaborations. This demonstrates that the approach can scale across different media organizations and markets.

RTBF’s collaboration with Bodyguard demonstrates that artificial intelligence can strengthen rather than replace human judgment in digital moderation. By combining automation with editorial expertise, RTBF has built a system that protects open dialogue while preserving context, nuance and trust. The project’s success lies in treating AI as a long-term partner: a tool that evolves through continuous feedback, cultural adaptation and ethical reflection. This approach offers a blueprint for any media organization, public or commercial, seeking to manage online communities with both efficiency and empathy.
