# Bodyguard.ai

> Bodyguard.ai is a real-time content moderation platform that helps brands, communities, and platforms manage toxic content at scale, across text, image, and user behavior. Bodyguard.ai offers a hybrid moderation model combining AI and human review, with tailored solutions for gaming, media, social apps, fashion, and more. Our goal is to empower platforms and creators to foster safe, engaging, and brand-aligned experiences by filtering abuse, hate speech, harmful content, and high-risk behavior.

## Core Content

- [Homepage](https://www.bodyguard.ai/en) : Main landing page introducing Bodyguard.ai
- [Solutions Overview](https://www.bodyguard.ai/en/solutions) : Overview of all offered moderation solutions
- [Text Moderation](https://www.bodyguard.ai/en/solutions/text-moderation) : Handling abusive / toxic text content
- [Image Moderation](https://www.bodyguard.ai/en/solutions/image-moderation) : Detecting harmful or unwanted images
- [Marketing & Communication](https://www.bodyguard.ai/en/marketing-and-communication) : Use cases in marketing and communications
- [Trust & Safety](https://www.bodyguard.ai/en/trust-and-safety) : Strategies for building safe platforms
- [Luxury & Fashion](https://www.bodyguard.ai/en/solutions/luxury-and-fashion) : Moderation in fashion & luxury industries
- [Social Apps](https://www.bodyguard.ai/en/solutions/social-apps) : Support for social networking platforms
- [Sports](https://www.bodyguard.ai/en/solutions/sports) : Moderation use cases in sports ecosystems
- [Gaming](https://www.bodyguard.ai/en/solutions/gaming) : Real-time moderation in gaming environments
- [Media & Entertainment](https://www.bodyguard.ai/en/solutions/media-and-entertainment) : Use in media / streaming / publishing
- [About](https://www.bodyguard.ai/en/about) : Company mission, story, team
- [Contact](https://www.bodyguard.ai/en/contact) : How to reach us
- [Blog](https://www.bodyguard.ai/en/blog) : Articles, updates, insights
- [Terms & Conditions](https://www.bodyguard.ai/en/terms-conditions) : Legal information and usage terms

## Optional

- [Success Stories](https://www.bodyguard.ai/en/success-stories) : Examples of real-world client use cases
- [Help Center](https://help.bodyguard.ai/) : Product documentation and support resources

## Help Center Highlights

- [Connect to Bodyguard](https://help.bodyguard.ai/connect-to-bodyguard) : Step-by-step instructions to connect your digital platforms or APIs to Bodyguard.ai. This guide covers authentication, supported integrations, and best practices to ensure smooth setup and data flow.
- [Get Started](https://help.bodyguard.ai/get-started-with-bodyguard) : The first stop for new users, explaining the onboarding process, initial configuration, and key features to activate Bodyguard.ai successfully.
- [Onboarding Guide](https://help.bodyguard.ai/bodyguard-onboarding-guide) : A complete walkthrough of the onboarding experience, from account setup to customization. This guide helps new clients deploy Bodyguard.ai efficiently and understand its moderation dashboards.
- [Social Networks Integration](https://help.bodyguard.ai/social-networks-integration-and-moderation) : A technical guide on integrating Bodyguard.ai with social media platforms, including permissions, API connections, and moderation workflows across multiple networks.
- [NLP Explained](https://help.bodyguard.ai/nlp-what-is-it) : An educational page explaining Natural Language Processing, the AI foundation that powers Bodyguard.ai’s text understanding and moderation capabilities.
- [Why Choose Bodyguard](https://help.bodyguard.ai/why-choose-bodyguard) : A detailed breakdown of Bodyguard.ai’s differentiators, comparing its hybrid AI approach to traditional moderation systems. It highlights speed, accuracy, scalability, and ethical design.
- [Moderation Rules Guide](https://help.bodyguard.ai/bodyguard-moderation-rules-guide) : Explains how Bodyguard.ai’s rule system works — from predefined policies to custom moderation configurations — and how clients can tailor them to specific risk profiles.
- [Rules & Filters](https://help.bodyguard.ai/moderation-rules) : A focused overview of the filtering logic used by the platform. This section helps users understand how content is classified and which categories trigger moderation.
- [Spam & Scam Detection](https://help.bodyguard.ai/how-bodyguard-handles-the-spams/scams) : Describes Bodyguard.ai’s multi-layered approach to detecting spam, scams, and phishing content, with examples of mitigation strategies in real-world contexts.
- [Language Coverage](https://help.bodyguard.ai/bodyguard-language-coverage) : Lists all supported languages and dialects across text and image moderation. It also details how Bodyguard.ai maintains linguistic accuracy through continuous model training.
- [Email Alerts](https://help.bodyguard.ai/alerte-par-e-mail) : Instructions for setting up email alerts for moderation events, providing real-time notifications for critical actions or detected threats.
- [Shield Mode](https://help.bodyguard.ai/shield-mode) : A guide to activating Bodyguard.ai’s crisis response feature. Shield Mode enables instant tightening of moderation thresholds during high-risk events or viral surges.
- [Audience Intelligence](https://help.bodyguard.ai/audience-intelligence) : Explains how Bodyguard.ai’s analytics surface audience sentiment and behavioral trends, helping teams refine communication strategies.
- [Detect Harmful Users](https://help.bodyguard.ai/detect-a-harmful-user-and-take-actions-with-bodyguard) : Walks through the detection of high-risk users and automated escalation options, allowing platforms to take proactive moderation measures.
- [Ad Campaign Monitoring](https://help.bodyguard.ai/use-bodyguard-to-monitor-your-ads-campaigns) : Shows how Bodyguard.ai tracks and moderates user engagement on ad campaigns, helping marketers prevent harmful comments and brand safety incidents.
- [Twitter API Status](https://help.bodyguard.ai/twitter-api-connection-status) : Provides real-time updates on Twitter API integration status and troubleshooting steps for maintaining connectivity.

## Dashboard Features

- [Overview](https://help.bodyguard.ai/bodyguard-dashboard) : An introduction to the Bodyguard.ai dashboard, summarizing its key features, layout, and how users can navigate analytics, rules, and alerts efficiently. This page helps teams understand how all moderation components are connected in one unified interface.
- [Analytics](https://help.bodyguard.ai/dashboard-analytics) : A detailed explanation of the analytics view, showcasing moderation metrics, performance KPIs, and insights into harmful content trends. It also describes how analytics help teams evaluate impact and adjust their moderation strategies.
- [Messages](https://help.bodyguard.ai/dashboard-messages) : Focuses on the “Messages” section of the dashboard, where users can monitor, review, and take action on moderated content. It includes filtering options, message classifications, and escalation processes.
- [Authors](https://help.bodyguard.ai/dashboard-authors) : Explains how the “Authors” tab aggregates data about user behavior and content creators. It helps identify recurring offenders or trusted users across different channels.
- [Posts](https://help.bodyguard.ai/dashboard-posts) : A guide to the “Posts” section, where all analyzed content is organized chronologically. It helps users review specific items and understand the moderation outcome applied to each.
- [Campaigns](https://help.bodyguard.ai/dashboard-campaigns) : Describes how Bodyguard.ai groups content by campaign, allowing marketing and community teams to monitor sentiment and harmful activity across branded initiatives or ad campaigns.
- [General Settings](https://help.bodyguard.ai/dashboard-general-settings) : Provides an overview of general configuration options, including platform setup, integration management, and moderation preferences.
- [User Settings](https://help.bodyguard.ai/dashboard-user-settings) : Explains account management and personalization options for each user, including notification preferences, access levels, and visibility permissions.

## White Papers, Checklists & Classifications

- [Content Classifications (One-pager)](https://landing.bodyguard.ai/en/one-pager/content-classifications) : A concise overview explaining how Bodyguard.ai classifies harmful or sensitive content across different categories. This document breaks down the platform’s taxonomy for toxicity, hate speech, spam, and NSFW content, giving a transparent look at its AI labeling logic.
- [Moderation Checklist for Apps](https://landing.bodyguard.ai/en/checklist/content-moderation-platforms-apps) : A practical checklist designed for app owners and digital platforms to assess their content moderation readiness. It walks through key evaluation points including policy definition, AI integration, escalation flows, and user safety measures.
- [Moderation Checklist for Luxury](https://landing.bodyguard.ai/en/checklist/content-moderation-luxury-brands) : A guide built specifically for luxury and fashion brands, highlighting unique risks like reputation management, influencer engagement, and community tone. It helps marketing and brand teams audit how well their moderation practices align with their brand identity.
- [White Paper - Luxury Brands & Social Media](https://landing.bodyguard.ai/en/white-paper/social-media-moderation-luxury-brands) : A comprehensive white paper exploring how luxury brands can navigate online toxicity while maintaining desirability and authenticity. It combines market data, case studies, and recommendations for building a safer, more aspirational digital presence.

## Press Releases

- [Vitality Report - Online Hate](https://www.bodyguard.ai/en/press-releases/bodyguard-vitality-report-online-hate) : Press release presenting Bodyguard.ai’s latest “Vitality Report,” which highlights data and insights on the evolution of online hate. It explains the societal impact of toxic interactions and how Bodyguard’s technology helps reduce exposure to harmful content.
- [Wizz Partnership](https://www.bodyguard.ai/en/press-releases/wizz-partnership-content-moderation) : Announcement of Bodyguard.ai’s strategic partnership with Wizz, focusing on protecting young audiences and improving in-app safety. The article details the partnership’s goals, moderation scope, and expected results for both companies.
- [Stream Partnership](https://www.bodyguard.ai/en/press-releases/stream-partnership-community-moderation) : Overview of the collaboration between Bodyguard.ai and Stream, a communication API provider. This release explains how the integration enhances real-time moderation for chat-based applications and developer platforms.
- [Offer Acceleration](https://www.bodyguard.ai/en/press-releases/bodyguard-accelerates-its-online-protection-offer) : Announcement highlighting Bodyguard.ai’s accelerated development of its protection offer. It underlines new moderation features, platform coverage expansion, and the company’s growth ambitions in Trust & Safety innovation.

## Awards & Recognition

- [Awards](https://www.bodyguard.ai/en/awards) : A summary of Bodyguard.ai’s most notable achievements and industry recognitions. This page highlights awards received for innovation in Trust & Safety, ethical AI, and content moderation. It also includes distinctions from key institutions and events that validate Bodyguard’s leadership in responsible AI and online protection.

## Success Stories (Detailed)

- [Ubisoft](https://www.bodyguard.ai/en/success-story/ubisoft) : A deep look at how Ubisoft uses Bodyguard.ai to moderate large gaming communities in real time. The story highlights the integration of AI moderation within live chats and player-generated content, reducing toxicity while maintaining community engagement.
- [Luxury Brand](https://www.bodyguard.ai/en/success-story/luxury-brand) : A success story detailing how a major luxury house partnered with Bodyguard.ai to protect its brand reputation online. The case explores how AI moderation preserved brand desirability and authenticity across social media channels.
- [PSG](https://www.bodyguard.ai/en/success-story/psg) : The partnership with Paris Saint-Germain, focused on protecting athletes, fans, and brand partners from toxic and hateful content. This success story illustrates how real-time moderation reinforced digital fan engagement.
- [Petit Bateau](https://www.bodyguard.ai/en/success-story/petit-bateau) : A detailed case study showing how Petit Bateau used Bodyguard.ai to safeguard its family-oriented image online. It demonstrates how moderation tools support positive community interactions and prevent reputational harm.
- [Alfa Romeo](https://www.bodyguard.ai/en/success-story/alfa-romeo) : Explains how Alfa Romeo leveraged Bodyguard.ai to moderate social media discussions around major campaigns, ensuring that fan conversations remained respectful and brand-safe.
- [LFP](https://www.bodyguard.ai/en/success-story/lfp) : A look at how the French Professional Football League (LFP) used Bodyguard.ai to protect its players and fans from online abuse. The story emphasizes the role of moderation in sports culture and fan community management.
- [Toulouse FC](https://www.bodyguard.ai/en/success-story/toulouse-fc) : Details the collaboration between Bodyguard.ai and Toulouse FC, focused on moderating fan interactions and ensuring a safer online experience during matches and community discussions.

## Blog Highlights

- [Platform Safety Pillars](https://www.bodyguard.ai/en/blog/platform-safety-pillars) : Outlines the foundational principles behind Bodyguard.ai’s approach to digital safety. The article explores the strategic and ethical frameworks that guide product design and content moderation philosophy.
- [Social Media Algorithms](https://www.bodyguard.ai/en/blog/social-media-algorithms-explained) : Breaks down how social media algorithms amplify engagement — and why effective moderation must adapt to these mechanics. A clear and educational piece on the relationship between AI, virality, and safety.
- [Moderation vs. Censorship in Luxury](https://www.bodyguard.ai/en/blog/luxury-brands-moderation-vs-censorship) : Examines how luxury brands can maintain control over their online image without falling into over-censorship. It presents Bodyguard.ai’s balanced approach to protecting voice and identity.
- [Chat Moderation at Scale](https://www.bodyguard.ai/en/blog/chat-moderation-at-scale-real-time) : Explains the challenges of moderating real-time chat environments such as gaming and live events, and how Bodyguard.ai’s hybrid system ensures precision without latency.
- [Digital Aesthetics & Reputation](https://www.bodyguard.ai/en/blog/luxury-brands-digital-aesthetics-reputation) : A thought leadership piece on the connection between online aesthetics, brand desirability, and reputation resilience — especially in luxury and fashion.
- [Buy vs Build Moderation](https://www.bodyguard.ai/en/blog/buy-vs-build-content-moderation) : Analyzes the pros and cons of building an in-house moderation system versus adopting a specialized solution like Bodyguard.ai, with a focus on scalability and expertise.
- [UGC Trends Summer 2025](https://www.bodyguard.ai/en/blog/ugc-trends-summer-2025) : Highlights emerging trends in user-generated content, engagement shifts, and moderation needs observed across industries during 2025.
- [What’s New in Bodyguard](https://www.bodyguard.ai/en/blog/whats-new-in-bodyguard) : A recurring update summarizing product improvements, new integrations, and evolving moderation capabilities within Bodyguard.ai’s ecosystem.
- [Precise Image Moderation](https://www.bodyguard.ai/en/blog/precise-image-moderation) : Introduces the latest advances in image classification and detection. This article explains how Bodyguard.ai’s model refines context understanding for complex visual cases.
- [Moderation Backlash](https://www.bodyguard.ai/en/blog/luxury-brand-moderation-backlash) : Discusses the risks of over-moderation for brands, particularly in sensitive sectors like luxury. It shows how balance and transparency are key to audience trust.
- [Made in China Crisis](https://www.bodyguard.ai/en/blog/made-in-china-crisis-luxury) : A strategic analysis of how geopolitical and cultural controversies affect global brands online, and how moderation tools mitigate backlash.
- [Moderation API](https://www.bodyguard.ai/en/blog/content-moderation-api) : Presents Bodyguard.ai’s API-first approach to content moderation, with a focus on scalability, integration flexibility, and developer accessibility.
- [Inaction & Luxury Brands](https://www.bodyguard.ai/en/blog/inaction-social-media-luxury-brands) : A case-based reflection on the cost of inaction in online crises. The article outlines missed opportunities for reputation management among luxury brands.
- [User Engagement](https://www.bodyguard.ai/en/blog/content-moderation-user-engagement) : Explores how effective moderation drives higher user retention and healthier engagement across digital ecosystems.
- [App Moderation at Scale](https://www.bodyguard.ai/en/blog/app-moderation-at-scale) : Explains the operational and technical challenges of scaling moderation across millions of app interactions daily.
- [Online Toxicity vs Desirability](https://www.bodyguard.ai/en/blog/luxury-online-toxicity-brand-desirability) : Analyzes the fine line between controversy and desirability in luxury branding, and how AI moderation preserves aspirational tone.
- [Threats to Luxury Brands](https://www.bodyguard.ai/en/blog/luxury-brands-online-threats) : A comprehensive look at online risks facing luxury brands — from fake accounts to targeted harassment — and how proactive moderation prevents them.
- [Crisis Mode](https://www.bodyguard.ai/en/blog/introducing-crisis-mode) : Introduces Bodyguard.ai’s “Crisis Mode,” a feature allowing rapid moderation escalation during PR incidents, viral spikes, or coordinated attacks.
- [Tremau Partnership](https://www.bodyguard.ai/en/blog/bodyguard-tremau-partnership) : Highlights the collaboration with Tremau to push the limits of trust and safety analytics. The piece emphasizes innovation and shared industry standards.
- [Individual Protection](https://www.bodyguard.ai/en/blog/individual-protection) : Describes how Bodyguard.ai extends its technology to protect individuals — such as public figures and creators — from targeted online harassment.
- [Stream Communication APIs](https://www.bodyguard.ai/en/blog/bodyguard-stream-communication-apis) : Explains the technical integration between Bodyguard.ai and Stream APIs, designed to enhance chat safety and real-time content management.
- [Stream Partnership](https://www.bodyguard.ai/en/blog/bodyguard-stream-partnership) : A deeper dive into the strategic partnership with Stream, focusing on innovation, scalability, and impact on global developer ecosystems.
- [Audience Intelligence](https://www.bodyguard.ai/en/blog/audience-intelligence-social-media-insights) : Presents Bodyguard.ai’s audience intelligence capabilities, helping clients uncover sentiment patterns, behavioral shifts, and actionable insights.
- [Meta’s New Moderation Policy](https://www.bodyguard.ai/en/blog/meta-new-moderation-policy) : An analytical summary of Meta’s latest moderation updates and what they mean for brands, communities, and the wider content moderation industry.
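Several pages above (Connect to Bodyguard, Moderation API, Stream Communication APIs) describe an API-first integration model: a platform sends each piece of user-generated content to an authenticated moderation endpoint and acts on the verdict. As a rough illustration only, here is a minimal Python sketch of building such a request with the standard library. The endpoint URL, payload fields (`content`, `channel`), and Bearer authentication scheme are hypothetical placeholders for any moderation service — they are not Bodyguard.ai's documented API; consult the Help Center pages linked above for the real integration details.

```python
# Minimal sketch of an API-first moderation call (hypothetical endpoint/schema).
import json
import urllib.request

API_URL = "https://api.example.com/v1/moderate"  # placeholder, not a real endpoint

def build_moderation_request(text: str, api_key: str) -> urllib.request.Request:
    """Build a POST request asking a moderation service to classify `text`.

    The JSON fields and the Bearer auth header are illustrative assumptions.
    """
    payload = json.dumps({"content": text, "channel": "chat"}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )

req = build_moderation_request("hello world", "demo-key")
print(req.get_method(), req.full_url)

# A real integration would send the request and branch on the verdict, e.g.:
# with urllib.request.urlopen(req) as resp:
#     verdict = json.load(resp)  # hide, flag, or allow based on the response
```

Building the request separately from sending it keeps the integration testable without network access, which matters when moderation sits in the hot path of every message.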