List of all severity levels that Bodyguard.ai can assign to a detected message:
- A message that poses an immediate threat to the safety or well-being of individuals or groups, such as messages that incite violence.
- A message that is likely to cause significant harm or damage, such as messages that contain hate speech or discrimination against protected groups.
- A message that has the potential to cause harm or distress but is less urgent or severe.
- A message that may be offensive or inappropriate.
- A message that is not considered harmful or offensive.
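
The five levels above form an ordered scale, which a client application might model as an enum and use to drive triage rules. The sketch below is a hypothetical illustration only: the enum names (`CRITICAL` through `NONE`) and the `requires_immediate_action` helper are assumptions, not the actual Bodyguard.ai API values.

```python
from enum import Enum


# Hypothetical names for the five documented severity levels; the real
# Bodyguard.ai API may use different identifiers.
class Severity(Enum):
    CRITICAL = "critical"  # immediate threat, e.g. incitement to violence
    HIGH = "high"          # likely significant harm, e.g. hate speech
    MEDIUM = "medium"      # potential harm or distress, less urgent
    LOW = "low"            # may be offensive or inappropriate
    NONE = "none"          # not considered harmful or offensive


def requires_immediate_action(severity: Severity) -> bool:
    """Example triage rule: escalate only the two most severe levels."""
    return severity in (Severity.CRITICAL, Severity.HIGH)


print(requires_immediate_action(Severity.CRITICAL))  # True
print(requires_immediate_action(Severity.LOW))       # False
```

A real integration would read the severity from the moderation response and branch on it, for example hiding `CRITICAL` messages outright while merely flagging `MEDIUM` ones for review.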