Behind a seemingly simple idea hides a complex mission: protecting internet users in real time against cyberbullying. Why? Because no other application of this kind exists, and the moderation offered by the various platforms is simply not enough.
The technology must be able to analyse the context in which a comment is written, and determine the person or people it is aimed at.
The hardest part is grasping and interpreting the thought process behind a comment: you must understand irony, sarcasm and humour. A layer of artificial intelligence is essential to reduce false positives (content detected as hateful when it is not) and increase accuracy.
The predictive model also allows the technology to understand the relationship between two individuals. For example, is the author of a comment a “subscriber” of the person to whom he or she is replying? This required research, as well as the cross-referencing of more than 80 metadata points, including the reaction time after publication, the percentage of capital letters, and the profile picture.
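To make the idea of metadata features concrete, here is a minimal sketch of how a few of the signals mentioned above (capital-letter ratio, reaction time, subscriber relationship) could be computed. The field and function names are illustrative assumptions, not Bodyguard's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Comment:
    text: str
    posted_at: datetime
    parent_posted_at: datetime   # when the post being replied to was published
    author_follows_target: bool  # is the author a "subscriber" of the target?

def extract_features(c: Comment) -> dict:
    """Compute a few illustrative metadata features for a comment."""
    letters = [ch for ch in c.text if ch.isalpha()]
    # Share of uppercase letters ("shouting" is a weak hate signal)
    caps_ratio = sum(ch.isupper() for ch in letters) / len(letters) if letters else 0.0
    # How quickly the comment followed the original post
    reaction_seconds = (c.posted_at - c.parent_posted_at).total_seconds()
    return {
        "caps_ratio": caps_ratio,
        "reaction_seconds": reaction_seconds,
        "author_follows_target": c.author_follows_target,
    }
```

In a real system, dozens of such features would be fed alongside the text itself into the predictive model rather than used in isolation.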
A manageable, easy-to-use service that speeds up production.
We chose to use OVHcloud AutoML, a distributed and scalable machine learning platform. This Software as a Service (SaaS) solution allowed us to automate the creation, deployment and querying of machine learning models. It also gave us the ability to integrate open source algorithms.
OVHcloud AutoML accelerated the development phase. Ten days were required to create the Bodyguard predictive model, and 20 days to develop the meta-learning model, which analyses the relationship between the author of the content and the commenter.
Thanks to this model, the detection rate of the Bodyguard technology increased by 12.5%, going from 80% to 90%. The number of false positives decreased by 50%, from 6% to 3%.
It took two years to develop the final learning algorithm and integrate it into a free mobile application, available on Android and iOS since October 2017. Today, Bodyguard deletes hateful comments in real time on YouTube, Instagram, Twitter, Twitch and Mixer.
In July 2019, this virtual bodyguard had attracted several thousand users and reached a 97% satisfaction rate.
The application will soon be translated into English and Italian. A new solution called “Bodyguard for Families” will also be developed to immediately alert parents if their children are being cyberbullied.
In the long term, Bodyguard aspires to position itself as a cloud provider of AI-powered automatic moderation solutions. To this end, Bodyguard will make its technology available via an API, aimed at anyone who wants to protect themselves, their loved ones, their users, their image, their reputation, or their employees.
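To illustrate what querying such a moderation API might look like from a client's side, here is a small sketch that builds a JSON request payload. The endpoint and field names are entirely hypothetical assumptions for illustration; the source does not describe Bodyguard's actual API:

```python
import json

# Hypothetical payload for a moderation API of this kind.
# The field names ("text", "author", "target") are illustrative only.
def build_moderation_request(comment: str, author_id: str, target_id: str) -> str:
    """Serialize a comment and its context into a JSON request body."""
    payload = {
        "text": comment,
        "author": author_id,    # who wrote the comment
        "target": target_id,    # who the comment is aimed at
        "context": "comment_reply",
    }
    return json.dumps(payload)
```

Packaging the author and target alongside the text reflects the article's point that moderation needs relational context, not just the words themselves.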