An autonomous and intelligent moderation solution


Acquisition, retention, engagement and e-reputation: Bodyguard protects you!

A unique technology to protect...

Your platform

Healthy conversations increase time spent on your platform by 60%

Your clients

A clean platform helps you retain your customers and users up to 3 times better

Your revenue

A clean platform will ensure on average 16% more advertising revenue

Your image

Don't let toxic content on your platform tarnish your brand and its image


Choose Bodyguard

Protection

We act as a real-time filter, detecting 90% of toxic comments


A tool that saves you time and labor.

Your rules

You select your own moderation criteria.


Available in real time 24/7.

Test the technology
Discover our amazing technology

→ Bodyguard understands misspelled words and typos.

→ Bodyguard understands SMS language.

→ Bodyguard only flags insults that are aimed at you personally.

→ Bodyguard doesn't delete criticism or negative comments.

→ Bodyguard understands English, French and Italian.

Our tailored solutions
Discover the solutions we have developed for each industry.

The responsibility that social media platforms and companies bear for removing hateful content is being reinforced by new laws emerging around the world (e.g. the Avia law in France, NetzDG in Germany).


Human moderation is still widely used to manage hateful comments, but it has limitations: bias, slow reaction times, the risk of depression for moderators, and high cost. Moderation as we know it needs to evolve so that interactions between users can be analyzed closely. Given the sheer volume of content, it is becoming difficult to moderate everything on networking platforms and social networks, brand channels, media and gaming platforms, and blogs.

GAFA companies (Google, Amazon, Facebook, Apple) use artificial intelligence (machine learning and deep learning) to detect some of the hateful content on their platforms and moderate it automatically. This technology serves as a filter but carries many biases, not to mention the error rate and frequent false positives. This moderation, at once excessive and limited, comes at the expense of users.


Bodyguard has developed a software solution to moderate all your user-generated content (Facebook pages, websites, mobile applications, comment sections, etc.) automatically and intelligently. The technology analyzes your content, flags any toxic content it detects, and specifies its type (insults, mockery, trolling, sexism, racism, threats) and severity level. With Bodyguard's SaaS platform, you choose the moderation level you want and keep an eye on your data. Our technology detects 90% of hateful content with only a 2% error rate. This makes your organisation's content moderation process much easier, saves a considerable amount of time and, most importantly, protects you and your community from cyberharassment and hate speech efficiently and securely.
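To illustrate the workflow described above (a result carrying a toxicity flag, a type, and a severity level, filtered against a platform-chosen moderation level), here is a minimal sketch in Python. All names in it (ModerationResult, should_remove, the 0-3 severity scale) are hypothetical illustrations, not Bodyguard's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModerationResult:
    """Hypothetical shape of a per-comment analysis result."""
    toxic: bool
    category: Optional[str]  # e.g. "insult", "racism", "threat"
    severity: int            # assumed scale: 0 (none) to 3 (high)

def should_remove(result: ModerationResult, threshold: int = 2) -> bool:
    """Apply a platform-chosen moderation level to one result.

    Only toxic content at or above the chosen severity threshold
    is removed; criticism and mild negativity pass through.
    """
    return result.toxic and result.severity >= threshold

# A severe insult is removed; a mildly negative comment is kept.
insult = ModerationResult(toxic=True, category="insult", severity=3)
critique = ModerationResult(toxic=False, category=None, severity=0)
print(should_remove(insult))    # True
print(should_remove(critique))  # False
```

Raising or lowering the threshold is how a client would tune the "moderation level you want" mentioned above.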