Today, social networks are part of our daily lives and give millions of people the opportunity to discuss and debate peacefully. You almost certainly know Twitter, Twitch, YouTube, Facebook, or other online public and private communities built around sports, gaming, entertainment, etc.
However, it would be a mistake to think that all exchanges are positive and benevolent. A lot of content is toxic, violent, and sometimes even dangerous.
Bodyguard consists of several parts, including:
An API that allows any organization to protect its communities
A dashboard with metrics that allows you to take actions (ban, etc.)
This documentation focuses on our API, and its purpose is simple: to explain how it works and describe how to use it.
We improve this documentation every week with more information, examples, and ready-to-use code snippets in multiple languages.
Bodyguard provides an API that responds in milliseconds to any message with a detailed analysis that contains:
Message classifications based on smart granularity
A recommended action that allows you to automate moderation or to favor human-recommended actions
Context that our algorithm understands (the target, the channel's contextual information, etc.)
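To make this concrete, here is a minimal sketch of how a client might consume such a response. The field names (`classifications`, `recommended_action`, `context`) and values are illustrative assumptions, not the actual Bodyguard API schema; consult the endpoint reference for the real payload.

```python
import json

# Hypothetical JSON payload of the kind the API might return for an
# analyzed message. Field names and values are assumptions for
# illustration only, not the real Bodyguard schema.
raw = json.dumps({
    "message": "you are an idiot",
    "classifications": [{"type": "INSULT", "severity": "HIGH"}],
    "recommended_action": "REMOVE",
    "context": {"target": "SOMEONE_ELSE", "channel": "gaming"},
})

analysis = json.loads(raw)

# Automate moderation from the recommended action, or fall back to a
# human reviewer when no automatic action is suggested.
if analysis["recommended_action"] == "REMOVE":
    decision = "removed"
else:
    decision = "sent to human review"

print(f"Message {analysis['message']!r} was {decision}.")
```

Automating on the recommended action while routing ambiguous cases to humans is one way to combine the API's classifications with your own moderation policy.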