Documentation

Everything about our Tech

Start using a powerful, smart content moderation API in minutes and get a full overview of its content protection features

Welcome to Bodyguard developer guide

Bodyguard is a powerful technology for intelligent protection and scalable moderation that you can fully customize. Its API lets you quickly moderate categories of toxic content based on contextual analysis.

Latest documentation updates

  • October, 12th 2022

    /analyze - Make param publishedAt mandatory in AnalysisRequest

    The `publishedAt` param is now required. It is a key parameter that allows our system to operate optimally and gives you the best accuracy for features such as SPAM detection. See the request sketch after this changelog for how to send it.

  • September, 21st 2022

    /analyze - Add param defaultLanguage to AnalysisRequest

    A new param `defaultLanguage` is available on the `/analyze` endpoint. It is used when language detection fails: if you don't set the `language` param, our system tries to auto-detect the language of the text sent for analysis, and if detection fails, `defaultLanguage` is used instead. This is also shown in the request sketch after this changelog.

  • September, 20th 2021

    New release with incoming webhook triggers

    There they are, the webhooks! They allow you to create advanced workflows based on the actions that Bodyguard takes with your data sources.

  • September, 17th 2021

    Hello world

    Welcome to our new Bodyguard documentation. You'll find everything you need to use our API, and you can follow our documentation updates in this changelog section ✨
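To tie the two `/analyze` changes above together, here is a minimal sketch of a request that sends both `publishedAt` and `defaultLanguage`. Only the `publishedAt`, `language`, and `defaultLanguage` params come from the changelog entries; the base URL, the bearer-token header, and the `text` field name are illustrative placeholders, not the documented request shape.

```typescript
// Minimal sketch of an /analyze request (Node 18+, global fetch).
// Base URL, auth header and `text` field name are placeholders for illustration.
const ANALYZE_URL = "https://api.example.com/analyze"; // placeholder

async function analyzeContent(text: string): Promise<unknown> {
  const response = await fetch(ANALYZE_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Assumed bearer-token scheme; see the Authentication section of this guide.
      Authorization: `Bearer ${process.env.BODYGUARD_API_KEY}`,
    },
    body: JSON.stringify({
      text,                                  // content to analyze (illustrative field name)
      publishedAt: new Date().toISOString(), // now mandatory: when the content was published
      // language: "fr",                     // set explicitly to skip auto-detection
      defaultLanguage: "en",                 // fallback if language auto-detection fails
    }),
  });

  if (!response.ok) {
    throw new Error(`/analyze request failed with status ${response.status}`);
  }
  return response.json();
}
```

In a real integration, pass the content's actual publication timestamp rather than the request time, since that is what `publishedAt` is meant to capture.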

Bodyguard developer guide

Home
Welcome to Bodyguard's documentation. This page gives you a full overview of this guide.
Getting Started
A simple technical and non-technical introduction to the Bodyguard ecosystem and, more importantly, a step-by-step tutorial that lets you moderate toxic content in minutes.
Authentication
Our authentication scheme provides a secure way of identifying the calling user. Endpoints also check the authentication token to verify the request's permissions and apply rate limiting.
Resources
Smart moderation means intelligent analysis, and that's what we do at Bodyguard. The Resources part explains how we fetch content, along with the different toxic content categories, severity levels, contextual analysis, and more.
Analyze
The analyzer is a simple endpoint that lets you send us any messages you want moderated.
Webhook
With the incoming webhook trigger, you can connect Bodyguard actions with many of the APIs, tools, or products that you and your team use.
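
To make that concrete, here is a minimal sketch of a service that receives an incoming webhook from Bodyguard. The `/bodyguard-webhook` route, the port, and the payload handling are illustrative assumptions; the actual event contract and any signature verification are described in the Webhook section.

```typescript
// Minimal sketch of a webhook receiver using Node's built-in http module.
// The /bodyguard-webhook route and the payload shape are illustrative only.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/bodyguard-webhook") {
    let raw = "";
    req.on("data", (chunk) => { raw += chunk; });
    req.on("end", () => {
      const event = JSON.parse(raw); // shape depends on the trigger you configured
      console.log("Received Bodyguard event:", event);
      // From here, forward the event to the tools your team already uses
      // (ticketing, chat, analytics, ...).
      res.writeHead(200);
      res.end();
    });
  } else {
    res.writeHead(404);
    res.end();
  }
});

server.listen(3000, () => console.log("Listening for Bodyguard webhooks on :3000"));
```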

Ready to dive in? Start moderating for free today 💪

No credit card required
