Protect your family from hateful comments

The first technology that alerts you in real time when your children are the perpetrators or victims of hateful comments on social networks.

How it works

1
Connect
Connect your child's social media accounts through our mobile application.
2
Detect
Bodyguard monitors and detects hateful comments 24/7.
3
Alert
If an incident takes place, Bodyguard will alert you in real time.
4
Educate
If incidents recur, Bodyguard will provide you with support (see the sketch below).
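
For readers curious about what these four steps look like in practice, here is a minimal, purely illustrative sketch of a monitor-and-alert loop. Every name in it (fetch_recent_comments, is_hateful, notify_parent) is a hypothetical placeholder, not Bodyguard's actual API.

```python
import time

# Hypothetical placeholder functions -- not Bodyguard's real API.
def fetch_recent_comments(account_id: str) -> list[dict]:
    """Pretend to pull the latest comments on a connected account."""
    return []  # a real integration would call the social network's API here

def is_hateful(text: str) -> bool:
    """Stand-in for a hate-speech classifier; always returns False here."""
    return False

def notify_parent(account_id: str, comment: dict) -> None:
    """Stand-in for a real-time push notification to the parent's phone."""
    print(f"Alert for {account_id}: {comment.get('text', '')!r}")

def monitor(account_id: str, poll_seconds: int = 60) -> None:
    """Continuously check a connected account and alert on hateful comments."""
    while True:
        for comment in fetch_recent_comments(account_id):
            if is_hateful(comment.get("text", "")):
                notify_parent(account_id, comment)
        time.sleep(poll_seconds)
```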

They trust us

Human moderation is still widely used to manage hateful comments, but it has clear limitations: bias, slow reaction times, a risk of depression for moderators, and high cost. Moderation as we know it today needs to evolve so that it can closely analyze interactions between users. Given the sheer volume of content, it is becoming difficult to moderate everything posted on social networks, brand channels, media platforms, gaming platforms, and blogs. The GAFA companies (Google, Apple, Facebook, Amazon) use artificial intelligence (machine learning and deep learning) to detect some of the hateful content on their platforms and moderate it automatically. This technology acts as a filter, but it carries many biases, a significant error rate, and many false positives. The result is moderation that is at once excessive and limited, to the detriment of users.
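
As a rough illustration of why context-blind automated filtering produces false positives, the toy classifier below flags any comment containing a blocked word, regardless of how it is used. The word list and logic are invented for this example and bear no relation to any real platform's system.

```python
# Toy keyword filter -- invented for illustration; real moderation systems
# rely on trained language models, not a hard-coded word list.
BLOCKED_WORDS = {"idiot", "loser"}

def flag_comment(text: str) -> bool:
    """Flag a comment if it contains any blocked word, ignoring context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKED_WORDS)

# Context-blind matching is what produces false positives:
print(flag_comment("You're such a loser"))              # True -- genuine insult
print(flag_comment("I felt like a loser until I won"))  # True -- false positive
```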

Press