Below is a list of all available types that Bodyguard.ai can detect:
- Content that is explicitly violent or discriminatory. This takes the form of racism, LGBTQIA+phobia, misogyny, ableism, dominance, moral or sexual harassment, and threats.
- Content that has malicious intent, including insults, denigration, aggression, condescension, and body-shaming.
- Content that disrupts followers' experience and normal discourse between Internet users.
- Content with a negative tone, but which is not hateful.
- Content that is neither supportive nor problematic.
- Supportive, fair-play, encouraging, or body-positive content.
- Content that belongs to none of the above categories.
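As a rough illustration, the categories above could be modeled as an enumeration when consuming classification results. Note that the names below (`ContentType`, `is_actionable`, and the string labels) are hypothetical for this sketch and are not the actual identifiers used by the Bodyguard.ai API.

```python
from enum import Enum


class ContentType(Enum):
    # Hypothetical labels for the categories listed above;
    # the real Bodyguard.ai type names may differ.
    HATEFUL = "hateful"        # explicitly violent or discriminatory content
    INSULT = "insult"          # malicious intent: insults, denigration, etc.
    TROLL = "troll"            # disrupts followers' experience and discourse
    NEGATIVE = "negative"      # negative tone, but not hateful
    NEUTRAL = "neutral"        # neither supportive nor problematic
    SUPPORTIVE = "supportive"  # supportive, fair-play, encouraging
    OTHER = "other"            # none of the above categories


def is_actionable(content_type: ContentType) -> bool:
    """Return True for categories that typically warrant moderation."""
    return content_type in {
        ContentType.HATEFUL,
        ContentType.INSULT,
        ContentType.TROLL,
    }


print(is_actionable(ContentType.HATEFUL))     # True
print(is_actionable(ContentType.SUPPORTIVE))  # False
```

Grouping the labels this way lets downstream code branch once on "actionable vs. not" rather than enumerating every category at each call site.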