Is content moderation a form of censorship?

Content moderation is a huge responsibility. If you run any kind of social platform or online community, users are relying on you to protect them from exposure to online toxicity, which in its worst forms can cause anxiety, depression, stress disorders, and even substance abuse.

The Bodyguard team

But moderating harmful content can occasionally come into conflict with freedom of speech, which permits the expression of almost any kind of view. What one person considers worthy of expression can easily be considered hateful and worthy of moderation by another. Can content moderation, therefore, be considered a form of censorship? In this article, we’ll explain how moderation and free speech can in fact coexist, and why moderation is a crucial aspect of a functioning online community.

With Elon Musk’s recent purchase of Twitter, content moderation and the limits of free speech are back in the headlines. Critics of the purchase claim that Musk’s laissez-faire attitude to free speech will increase hate speech, fake news, and other toxic content. Proponents argue that Twitter and other social platforms have too often crossed the line from moderation to censorship and that Musk will rectify this.

The special status of privately-owned social media platforms

A useful starting point in the moderation vs. free speech debate is to note that privately-owned social media platforms and networks have the right to set their own guidelines and moderation rules on what constitutes acceptable content. They aren’t legally bound to host anything that they deem to be inappropriate for their users. They also have a duty to filter out illegal content and remove disinformation. Because of these factors, platforms may seem to be engaging in censorship and inhibiting free speech, but in reality, they’re operating well within the law. On top of this, it’s important to note that other forms of moderation (e.g. fact-checking) shouldn’t be considered censorship because they don’t involve prohibiting speech.

The main difference between free speech and hate speech

There’s also a key distinction to be made between free speech and hate speech. Free speech is generally understood as a fundamental human right that allows you to express almost any view, even views that may offend others. But the right to free speech doesn’t extend to hate speech, which is generally defined as an offensive statement designed to incite violence or hatred against a person or group on the basis of their race, religious beliefs, gender, sexual orientation, or other characteristics. It’s not always easy to say where expressing controversial views ends and hate speech begins, but the factors used to draw the line often include intent, the context of expression, the intended audience, and the particular words chosen. For this reason, efforts to limit hate speech on social platforms can sometimes look like efforts to censor particular views.

Smart moderation in the service of user experience

Another point to consider is that moderation often plays a vital role in creating a positive user experience and encouraging free speech rather than damaging it. Users are more likely to engage with and share viewpoints if they feel respected and aren’t afraid of a backlash.

A viable way to encourage free speech while still protecting users is smart moderation: using software to automatically filter out content that is either harmful or not conducive to a healthy debate.
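To make the idea concrete, here is a minimal sketch of what such an automated filter might look like. Everything in it is a hypothetical placeholder rather than Bodyguard's actual technology: the toxicity_score function, the denylist, and the 0.7 threshold are invented for illustration, and a production system would use a trained language model instead of word counting.

```python
# A minimal, illustrative sketch of automated comment filtering.
# All names, patterns, and thresholds below are hypothetical.

from dataclasses import dataclass


@dataclass
class ModerationResult:
    allowed: bool
    reason: str | None = None


# Hypothetical denylist of known scam or fake-link domains.
BLOCKED_PATTERNS = {"scam-link.example", "free-crypto.example"}


def toxicity_score(text: str) -> float:
    """Placeholder for a real ML classifier; returns a score in [0, 1]."""
    hateful_terms = {"idiot", "loser"}  # toy example only
    words = text.lower().split()
    hits = sum(1 for w in words if w in hateful_terms)
    return min(1.0, hits / max(len(words), 1) * 5)


def moderate(text: str, threshold: float = 0.7) -> ModerationResult:
    # Filter known scam links first, then score the text for toxicity.
    if any(pattern in text for pattern in BLOCKED_PATTERNS):
        return ModerationResult(allowed=False, reason="blocked link")
    if toxicity_score(text) >= threshold:
        return ModerationResult(allowed=False, reason="toxic content")
    return ModerationResult(allowed=True)


print(moderate("Check out free-crypto.example now!"))   # blocked link
print(moderate("Great match, congrats to the team!"))   # allowed
```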

At Bodyguard, our solution uses artificial intelligence to analyze and filter huge amounts of content in real time. Online hate is automatically detected and handled according to its context, while other content such as scams or fake links is instantly filtered. Platform operators can choose from five levels of moderation and easily customize their own community rules. And with our ‘DirectedAt’ technology, it’s easy to tell whom moderated content is directed at and whether it’s genuinely toxic.
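The paragraph above names two ideas worth illustrating: adjustable moderation strictness and checking whether a toxic message is aimed at a specific person. The sketch below is a hypothetical illustration of those ideas only; the class names, thresholds, and heuristics are invented and do not reflect Bodyguard's real API or its ‘DirectedAt’ technology.

```python
# Hypothetical illustration of (1) adjustable moderation levels and
# (2) treating messages aimed at a person more strictly than general
# commentary. None of this is Bodyguard's real implementation.

from enum import IntEnum


class ModerationLevel(IntEnum):
    VERY_LOW = 1
    LOW = 2
    MEDIUM = 3
    HIGH = 4
    VERY_HIGH = 5


# Stricter levels tolerate less toxicity (thresholds are made up).
THRESHOLDS = {
    ModerationLevel.VERY_LOW: 0.9,
    ModerationLevel.LOW: 0.8,
    ModerationLevel.MEDIUM: 0.6,
    ModerationLevel.HIGH: 0.4,
    ModerationLevel.VERY_HIGH: 0.2,
}


def is_directed_at_user(text: str) -> bool:
    """Crude stand-in for target detection: @mentions or second person."""
    return "@" in text or " you " in f" {text.lower()} "


def should_remove(text: str, score: float, level: ModerationLevel) -> bool:
    # A message aimed at a specific person gets a lower (stricter)
    # threshold than general commentary, reflecting context awareness.
    threshold = THRESHOLDS[level]
    if is_directed_at_user(text):
        threshold *= 0.75
    return score >= threshold


print(should_remove("@sam you are pathetic", score=0.5,
                    level=ModerationLevel.MEDIUM))          # True
print(should_remove("that referee call was pathetic", score=0.5,
                    level=ModerationLevel.MEDIUM))          # False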

Bodyguard can also be integrated with social networks of all sizes. This way, platform operators can fulfill their duties as moderators while promoting free speech, fostering positive interactions, and boosting engagement. To find out more about our services and how we can accurately monitor content without straying into censorship, visit our site here.