Bodyguard included in the Twitter Toolbox to build a safer Internet

I have great news to share with all our clients and users: Twitter tested Bodyguard as a user-generated content moderation technology and integrated our solution into the new "Twitter Toolbox" announced last February.

A shared vision for a better Internet

A few months ago, Twitter contacted me about linking Bodyguard to a program designed to surface and highlight developer-built solutions to give users a better Twitter experience.

This program is named ‘Twitter Toolbox’. It’s a hub (in beta phase) where people on Twitter can easily discover, learn more about, and quickly sign up for third-party developer tools to enhance their Twitter experience. Twitter Toolbox includes self-serve, low-cost, or even free tools to help people express themselves more creatively, protect them from harmful content and profiles, and measure their content’s performance on Twitter.

This was the perfect opportunity for me to share my analysis of toxic content and its sometimes platform-specific variations, and propose a test of our proprietary technology.

Our shared vision quickly surfaced: Twitter should be and must always remain a platform where everyone is free to express themselves safely. Using intimidation or harassment to silence users is unacceptable, especially when it targets someone's political opinion, gender or race. No one deserves to be the target of a hate campaign. This ideal is attainable and merits our constant commitment and effort.

Bodyguard: a tool that solves a pressing user need

Twitter tested Bodyguard's solution with an analysis based on two inseparable pillars:

  • Our technological approach to determine how our tool works and protects users;

  • Our product approach and our general philosophy regarding the fight against online harassment.

The result? A record 90% detection rate for all types of toxic content on European and US accounts!

Bodyguard: How does it work?

Bodyguard goes beyond the simple reporting tool Twitter offers its users. It incorporates several functions:

  • The automatic detection and shadow banning of hate messages according to their type: insults, threats, trolling, body-shaming, homophobia, racism, misogyny, sexual or moral harassment, etc.;

  • Removal of unwanted content: spam, scam attempts, disguised advertisements – everything that pollutes discussion spaces;

  • Banning and shadow banning users;

  • Personal protection settings (only delete malicious comments intended for you) or general settings (delete all hateful content);

  • Different levels of moderation (none, low, medium, high or maximum).

On Twitter, Bodyguard facilitates the moderation of all toxic comments, including replies sent to users of our tool. How? By hiding toxic replies and moving them to the end of the comment thread. Because messages are hidden rather than deleted, this approach preserves the social network's engagement rate. Likewise, a blocked contributor will no longer be able to follow the user's account.

Bodyguard's solution is now part of Twitter Toolbox and is currently available to all the social network's members. Users can access a free tool via a mobile app, or opt for a paid version that offers extensive custom moderation and provides a dashboard and comprehensive analytical tools for community management.

Today, Bodyguard continues to assume its status as a leading online moderation solution for large social networks. Our team is committed to making Twitter, and other platforms, a safer and more welcoming environment.

Being part of Twitter Toolbox gives us first-hand access to the ecosystem of developer-built solutions that enhance the social network. This will allow us to continuously evolve our technology and stay one step ahead of toxic users.

Charles Cohen