Bodyguard.ai API Documentation

Bodyguard.ai is an advanced moderation solution that allows you to effectively protect your online platform from toxic content.

With our API, you can define and regulate what content is considered inappropriate on your platform. This is achieved through contextual analysis, which categorizes content accurately based on its context and meaning.

Bodyguard.ai is designed to be scalable, making it an ideal solution for both small- and large-scale platforms.

Whether you are running a community-driven website, social media platform, or any other type of online platform, Bodyguard.ai provides the intelligence and control you need to ensure a safe and secure environment for your users.

Prerequisite

To get started, make sure that you have created an account on Bodyguard.ai and have access to your dashboard. Next, create a new API key in your admin settings; with that key, you can make requests against any of the actions exposed by our HTTP APIs.

BASE URL

https://bamboo.bodyguard.ai/api
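As a minimal sketch, an authenticated request against the base URL can be assembled as below. The `X-Api-Key` header name and the JSON payload shape are assumptions for illustration only (this page does not document them); see the Authentication guide for the exact scheme.

```python
import json

BASE_URL = "https://bamboo.bodyguard.ai/api"

def build_request(path: str, api_key: str, payload: dict) -> dict:
    """Assemble everything an HTTP client needs for an authenticated call.

    NOTE: the "X-Api-Key" header name is an assumption for illustration;
    consult the Authentication guide for the actual scheme.
    """
    return {
        "method": "POST",
        "url": f"{BASE_URL}{path}",
        "headers": {
            "Content-Type": "application/json",
            "X-Api-Key": api_key,  # assumed header name
        },
        "body": json.dumps(payload),
    }
```

Any HTTP client can then send the resulting method, URL, headers, and body as-is.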

Getting started

Quickstart
Learn how to analyze content in minutes with the Bodyguard.ai API

Concepts
Learn all about the Bodyguard.ai vocabulary and classification taxonomy

Authentication
Learn how to authenticate requests and access Bodyguard.ai API features

Limitations
Learn about our API's technical restrictions and optimization best practices

Errors
Learn how to handle the types of errors returned by the Bodyguard.ai API

Webhooks
Learn all about Bodyguard.ai webhook features and how to use them

Endpoint

Analysis

Learn about the analysis endpoint and how to build a request that submits a text message for analysis.
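As a rough illustration of the request/response flow, the sketch below prepares a message body and inspects a hypothetical response. The payload fields (`content`, `type`) and the `severity` response field are assumptions, not the documented schema; see the Analysis endpoint reference for the real shapes.

```python
import json

def build_analysis_body(text: str) -> str:
    # Hypothetical payload shape for the analysis endpoint;
    # the real field names may differ.
    return json.dumps({"content": text, "type": "TEXT"})

def is_toxic(response: dict) -> bool:
    # Hypothetical response shape: assume the API reports a top-level
    # "severity" field, where anything other than "NONE" means the
    # message was classified as toxic.
    return response.get("severity", "NONE") != "NONE"
```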