How Bodyguard Works

Bodyguard’s role is to retrieve and analyze content via our API, returning a verdict (HATEFUL, NEUTRAL, SUPPORTIVE) and a recommended action (KEEP, REMOVE) based on that verdict.
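For illustration, an analysis result could be represented like this in TypeScript. This is a minimal sketch: the field names are assumptions made for the example, not Bodyguard's actual API schema.

```typescript
// Hypothetical shape of an analysis result; field names are
// assumptions for illustration, not Bodyguard's actual API schema.
interface AnalysisResult {
  verdict: "HATEFUL" | "NEUTRAL" | "SUPPORTIVE";
  recommendedAction: "KEEP" | "REMOVE";
}
```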


The recommended action takes your own preferences into account, i.e. your moderation needs and settings. For example, if an analyzed message comes back HATEFUL but with LOW severity, and you have opted to let low-severity content through, the recommended action will be KEEP rather than REMOVE.
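As a rough sketch of that rule, assuming a LOW/MEDIUM/HIGH severity scale and a hypothetical keepLowSeverity setting (neither is Bodyguard's actual configuration, both are assumptions for the example):

```typescript
// Sketch of the decision applied on your behalf, under the
// assumptions above; not Bodyguard's actual implementation.
type Verdict = "HATEFUL" | "NEUTRAL" | "SUPPORTIVE";
type Severity = "LOW" | "MEDIUM" | "HIGH";

interface ModerationSettings {
  keepLowSeverity: boolean; // let low-severity HATEFUL content through
}

function recommend(
  verdict: Verdict,
  severity: Severity,
  settings: ModerationSettings
): "KEEP" | "REMOVE" {
  if (verdict !== "HATEFUL") return "KEEP";
  if (severity === "LOW" && settings.keepLowSeverity) return "KEEP";
  return "REMOVE";
}
```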

We don’t remove toxic content ourselves; we tell the platform what the recommended action is. Based on the API's recommendation, the platform automatically keeps or removes the content, or can be configured to take other actions too, such as hiding the content or storing it somewhere.
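A minimal sketch of how a platform might act on the recommendation, assuming the result shape from the first sketch; removeContent is a hypothetical platform function, not part of the Bodyguard API:

```typescript
type RecommendedAction = "KEEP" | "REMOVE";

// Hypothetical platform-side function; in practice this would call
// your own content store or CMS.
function removeContent(contentId: string): void {
  console.log(`removing content ${contentId}`);
}

function applyRecommendation(contentId: string, action: RecommendedAction): void {
  if (action === "REMOVE") {
    removeContent(contentId); // or hide/store it, per your configuration
  }
  // KEEP: nothing to do, the content stays visible
}
```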

The thing to remember is that Bodyguard allows full automation of the moderation process: no human intervention is needed to review the recommended actions and apply them manually.