June 15, 2022

Does content moderation amount to censorship?

By The Bodyguard Team

Content moderation is a huge responsibility. If you run any kind of social platform or online community, users are relying on you to protect them from exposure to online toxicity, which in its worst forms can cause anxiety, depression, stress disorders, and even substance abuse.

But moderating harmful content can occasionally come into conflict with freedom of speech, which, by its very nature, permits the expression of almost any kind of view.

What one person considers worthy of expression can easily be considered hateful and in need of moderation by another. Can content moderation, therefore, be considered a form of censorship? Here, we explain how moderation and free speech can coexist, and why moderation is a crucial aspect of a functioning online community.

With Elon Musk’s recent purchase of Twitter, content moderation and the limits of free speech are back in the headlines. Critics of the purchase claim that Musk’s laissez-faire attitude to free speech will increase hate speech, fake news, and other toxic content. Proponents argue that Twitter and other social platforms have too often crossed the line from moderation to censorship, and that Musk will rectify this.

What makes privately-owned social media platforms different

A useful starting point in the moderation vs. free speech debate is to note that privately-owned social media platforms and networks have the right to set their own guidelines and moderation rules for what constitutes acceptable content. They aren’t legally bound to host anything they deem inappropriate for their users, and they have a duty to filter out illegal content and remove disinformation. Because of this, platforms may appear to be engaging in censorship and inhibiting free speech when, in reality, they’re operating well within the law. It’s also important to note that other forms of moderation (e.g. fact-checking) shouldn’t be considered censorship, because they don’t involve prohibiting speech.

The main difference between free speech and hate speech

There’s also a key distinction to be made between free speech and hate speech. Free speech is generally understood as a fundamental human right to express almost any view, even views that may offend others. But that right doesn’t extend to hate speech, which is generally defined as an offensive statement intended to incite violence or hatred against a person or group on the basis of race, religious belief, gender, sexual orientation, or other characteristics. It’s not always easy to say where expressing controversial views ends and hate speech begins, but the factors that determine the difference often include intent, the context of expression, the intended audience, and the particular words chosen. For this reason, efforts to limit hate speech on social platforms can sometimes look like efforts to censor particular views.

Smart moderation in the service of user experience

Another point to consider is that moderation often plays a vital role in creating a positive user experience and encouraging free speech rather than damaging it. Users are more likely to engage and share viewpoints if they feel respected and aren’t afraid of a backlash.

A viable way to encourage free speech while still protecting users is smart moderation: using software to automatically filter out content that is harmful or not conducive to healthy debate.
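To make that concrete, here is a minimal, hypothetical sketch of what such an automated filter might look like. The function names, threshold, and word list are all illustrative assumptions; a production system would rely on a trained classifier rather than a blocklist:

```python
# Minimal sketch of automated comment filtering. Illustrative only:
# `toxicity_score` and the threshold are assumptions, not any real API.

TOXICITY_THRESHOLD = 0.8  # assumed cutoff; real systems tune this per community

def toxicity_score(comment: str) -> float:
    """Stand-in for a trained classifier returning a toxicity score in [0, 1]."""
    blocklist = {"idiot", "scum"}  # a real model learns far richer signals than a word list
    words = comment.lower().split()
    return 1.0 if any(word.strip(".,!?") in blocklist for word in words) else 0.0

def moderate(comment: str) -> str:
    """Remove clearly harmful comments; publish the rest."""
    return "remove" if toxicity_score(comment) >= TOXICITY_THRESHOLD else "publish"

print(moderate("You make a fair point."))  # publish
print(moderate("You absolute idiot!"))     # remove
```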

At Bodyguard, our solution uses artificial intelligence to analyze and filter huge amounts of content in real time. Online hate is automatically detected and handled according to its context, while other content, such as scams and fake links, is also instantly filtered. Platform operators can choose from different levels of moderation to suit their needs and easily customize their own community rules. And by using contextual analysis, Bodyguard can understand who a piece of content is aimed at and whether it’s genuinely toxic or not.
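As a rough illustration of how contextual analysis and configurable moderation levels might fit together, here is a hypothetical sketch. The scoring rules, thresholds, and level names are assumptions made for the example, not a description of Bodyguard’s actual engine:

```python
# Hypothetical sketch of context-aware moderation with operator-chosen levels.
# All names, thresholds, and rules here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    directed_at_person: bool  # e.g. a reply aimed at a specific user

# Stricter levels remove milder insults; operators pick the level for their community.
LEVEL_THRESHOLDS = {"lenient": 0.9, "standard": 0.7, "strict": 0.5}

def contextual_score(comment: Comment) -> float:
    """Toy scorer: identical words weigh more heavily when aimed at a person."""
    base = 0.6 if "stupid" in comment.text.lower() else 0.0
    return min(1.0, base + (0.3 if comment.directed_at_person else 0.0))

def decide(comment: Comment, level: str = "standard") -> str:
    return "remove" if contextual_score(comment) >= LEVEL_THRESHOLDS[level] else "keep"

# The same word is kept when it targets an idea but removed when it targets a user.
print(decide(Comment("That idea is stupid", directed_at_person=False)))  # keep
print(decide(Comment("You're stupid", directed_at_person=True)))         # remove
```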

Bodyguard can also be integrated with social networks of all sizes. This way, platform operators can fulfill their duties as moderators while promoting free speech, fostering positive interactions, and boosting engagement. To find out more about how Bodyguard works, click here.
