Online toxicity survey: how brands are facing hateful and junk content
Want to better understand online hate? Are you looking to provide a safe place for your community and protect your brand? If so, this guide is for you!
The Bodyguard.ai team
As people increasingly weave the Internet and social media into their daily routines, they are exposed to a greater variety of content, both positive and negative. The relationship between consumers and brands is also shifting from physical, in-store interactions to online ones, and from one-to-one dialogues to forums and chat rooms. For any company, the Internet is an opportunity: a way to communicate with its customers, users and followers and to build a more engaged online community. But it also means being exposed to toxic content (hate, spam and fraud, to name just a few).
Why does online toxicity matter to brand reputation? What proportion of comments posted on social networks is hateful? What types of toxicity can businesses face online? What are the consequences of mismanaging online negativity? How can brands handle online toxicity and create an engaged community?
To measure the extent of toxic behaviour directed at brands and their communities, Bodyguard.ai conducted an in-depth study of its customer communities over a 12-month period, from July 2021 to July 2022, analysing more than 170 million comments across 1,200 brand channels.
To find out how best to manage toxic content, download the white paper here!
Want to know more before downloading?
This white paper is dedicated to helping companies better understand and manage online toxicity on social media. It provides useful insight into the potential scale of the problem facing brands that lack appropriate moderation. These 170 million comments represent just a microcosm of online interactions, yet the volume is already far beyond human review: a trained human moderator needs around ten seconds to assess a comment, so analysing each of the 170 million individually would have taken roughly 1.7 billion seconds, or almost 54 years of continuous work. Meanwhile, the algorithms that social media platforms deploy to moderate content, along with many technologies on the moderation market, have error rates of around 20-40%, largely because they fail to detect nuance, such as comments made between friends in a specific context, and because of other shortcomings in their machine learning.
What brands need is a technological solution that strikes the right balance between machine and human, between algorithms and linguistics: one that analyses and understands the context of online discussions in real time, with a cultural and linguistic approach. Contextual and autonomous moderation is the only way to offer clients and users a safe experience in a secure place where free speech is protected.
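To make the idea of blending automated scoring with human review concrete, here is a minimal, purely illustrative sketch in Python. It is not Bodyguard.ai's technology: the classifier is a toy keyword heuristic, and every name (Comment, score_toxicity, moderate, the thresholds) is hypothetical. The point is the routing logic: clear-cut cases are decided automatically, while ambiguous ones go to a person.

```python
# Illustrative sketch only: a toy hybrid moderation pipeline in which an
# automated classifier handles clear-cut cases and ambiguous comments are
# routed to a human reviewer. All names here are hypothetical and do not
# correspond to any real moderation API.

from dataclasses import dataclass
from typing import Literal

Decision = Literal["allow", "remove", "human_review"]

@dataclass
class Comment:
    text: str
    author_is_friend: bool = False  # toy stand-in for conversational context

def score_toxicity(comment: Comment) -> float:
    """Placeholder for a trained model; here, a crude keyword heuristic."""
    flagged = {"idiot", "trash", "scam"}
    hits = sum(word.strip(".,!?").lower() in flagged for word in comment.text.split())
    return min(1.0, hits / 3)

def moderate(comment: Comment, remove_above: float = 0.8, allow_below: float = 0.2) -> Decision:
    score = score_toxicity(comment)
    # Context matters: banter between friends is scored more leniently,
    # one simple way nuance can be folded into an automated decision.
    if comment.author_is_friend:
        score *= 0.5
    if score >= remove_above:
        return "remove"
    if score <= allow_below:
        return "allow"
    # The grey zone goes to a person rather than risking a wrong call.
    return "human_review"

if __name__ == "__main__":
    print(moderate(Comment("Great product, thanks!")))                 # allow
    print(moderate(Comment("You idiot, this is a trash scam!")))       # remove
    print(moderate(Comment("This is trash", author_is_friend=True)))   # allow
```

In a real system the keyword heuristic would be replaced by a contextual model and the human-review queue by an actual moderation workflow, but the two-threshold routing pattern captures the machine-plus-human balance described above.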
Matthieu Boutard, President and co-founder of Bodyguard.ai, comments: "95% of potential brand customers we spoke to mentioned maintaining freedom of speech as a concern. Brands want customers to be able to speak freely about their service, whether poor or good! However, they also recognise that the line between personal opinion and personal attack must be enforced. In any shop or office you walk into, you will see a sign at the door saying that violence and harassment will not be tolerated. Yet too often, what would get us thrown out of a bar, shop or public building is tolerated online. The human cost should not be overlooked! Behind every toxic comment, there is a real person being targeted on the basis of gender, sexual orientation, culture, or simply for doing their job. Brands have an opportunity to get ahead of the issue and to consider, plan and roll out moderation that protects their teams and their customers."
It is time to make the internet a better, safer, freer place to connect and communicate.