Customized content moderation: one size doesn’t fit all

The strength of a moderation solution comes from its suitability to your organization and your needs. Being able to customize the solution means you can tailor it to match those needs exactly.

Bastien

The importance of customized content moderation

If you go to the shop to buy a pair of trousers, you can’t buy just anything - at the very least you’ll need to find the right size and cut, and maybe even a colour that suits you. Choosing a moderation solution for your platforms is similar - one size doesn’t fit all. Anyone telling you otherwise most likely doesn’t have a customizable solution.

Why is it important to have a solution that can be adapted? Simply put, because the more you can tailor your moderation to the needs of your organization, the more accurate your results will be and the fewer problems you’ll see.

There are two risks when it comes to toxic content moderation, and they sit at opposite ends of the spectrum: over-moderation and under-moderation. Both can be very dangerous for platforms and communities:

  • over-moderating can translate to censorship and user disengagement

  • under-moderating can lead to self-censorship and user loss.

These two risks can be mitigated by achieving the right level of moderation, which is where customization comes into play.

A standard, out-of-the-box solution will apply the same moderation criteria no matter the end user and their specific needs. A gaming platform would be moderated in the same way as a Christian forum - even though these two platforms do not share the same moderation standards.

What dictates your moderation needs

To decide what kind of moderation your organization needs, and how much, the key factors to consider are:

  • Industry specifics: moderation needs vary from industry to industry. Gaming platforms, for example, can be more permissive regarding low-level insults or trolling, compared to media websites, which will need to be stricter.

  • Audience age and characteristics: needless to say, the lower the audience age, the stricter the moderation rules should be, making sure that absolutely anything potentially harmful is removed proactively. Similarly, audiences whose religion or gender can expose members to online abuse need to be protected against racism, LGBTQ+phobia, sexism and more.

  • Content topics: the more sensitive or controversial the content, the higher the risk for toxicity. When publishing content that can attract hateful comments, it’s important to be prepared and protected in advance. This allows the content to still be posted and can encourage debate while eliminating the online hate that can sometimes accompany these exchanges of ideas.

Customizable vs. non-customizable solutions

Looking at the content moderation solutions available on the market, it’s worth noting that customization options vary depending on the type of solution.

Manual moderation can be adapted to suit the needs of a platform or community and to follow brand guidelines. The caveat here is that this adaptability depends on having a clear set of guidelines that content moderators follow closely.

Machine-learning solutions will offer some customization options, but only as far as the technology allows. Because they rely on keyword detection, they lack contextual understanding and smart analysis capabilities, and won’t be able to offer the same fine-tuned moderation that an autonomous solution would.

Last but not least, autonomous and intelligent solutions like Bodyguard offer the most advanced customization options on the market. They are able to adapt to different industries, audiences and content topics for fully reliable moderation.

Bodyguard’s customization options explained

Toxic content categories (classifications)

Bodyguard’s technology analyzes content across a range of ‘toxic’ categories:

  • Insults

  • Hatred

  • Trolling

  • Moral harassment

  • Sexual harassment

  • Racism

  • Body-shaming

  • LGBTQ+phobia

  • Misogyny

  • Threats.

Depending on the individual needs of the platform or community, each organization can choose which toxic content categories it wishes to moderate - from as few as one category to all of them.

Adjustable moderation levels

For each toxic content category, the moderation level is adjustable, from low to medium, strict, and very strict. Depending on the level selected, the moderation will be more or less permissive. This is critical for avoiding both over- and under-moderation, which is key to keeping users happy, encouraging engagement, and attracting new users and partners.
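To make the idea concrete, here is a minimal sketch of what a per-category setup could look like as a configuration object, written in TypeScript. The category and level names are illustrative assumptions for the sake of the example, not Bodyguard’s actual API.

```typescript
// Hypothetical configuration sketch; category and level names are
// illustrative assumptions, not Bodyguard's actual API.
type ModerationLevel = "low" | "medium" | "strict" | "very_strict";

type ToxicCategory =
  | "insults"
  | "hatred"
  | "trolling"
  | "moral_harassment"
  | "sexual_harassment"
  | "racism"
  | "body_shaming"
  | "lgbtqphobia"
  | "misogyny"
  | "threats";

interface ModerationConfig {
  // Only the categories listed here are moderated; anything omitted is left untouched.
  categories: Partial<Record<ToxicCategory, ModerationLevel>>;
}

// Example: a gaming community that tolerates in-game trash talk
// but takes a hard line on identity-based abuse.
const gamingConfig: ModerationConfig = {
  categories: {
    insults: "low",
    trolling: "low",
    threats: "low",
    racism: "very_strict",
    sexual_harassment: "very_strict",
    lgbtqphobia: "very_strict",
  },
};
```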

What does this look like?

For example, gaming platforms and communities will need the moderation to be more permissive. This is especially relevant for threats, insults and trolling, which take on a different meaning within the gaming environment and its specific language:

An ‘*I’m going to kill you*’ that would be removed on other platforms won’t be removed on a gaming platform, since it is part of the game.

Adjusting moderation levels equally allows organizations to apply strict moderation where necessary, as would be the case for racism, sexual harassment and LGBTQ+phobia.

Who to protect

Within a platform or community, not everyone necessarily needs the same level of protection. That’s why Bodyguard’s technology distinguishes between two types of protection:

  • Individual protection - which moderates any toxic content directed at the account holder (your organization)

E.g. “You are a bunch of douchebags”

  • General protection - which moderates any toxic content, whether it’s aimed at the account holder, at someone in the community, or at people and groups outside the community

E.g. “White people should die” or “UserXYZ you are an idiot”

Removing noise - ads, spam and scams

Bodyguard can detect ads, spam and scams, which are classed as “noise”. As a platform or community owner, you might not want your comment sections to be flooded with noise such as “Subscribe to my channel now” or “GET PAID OWN THE NETWORK! JOIN NOW FOR FREE!”. This is why Bodyguard offers the option to moderate anything identified as ads, spam or scams through a simple toggle switch.
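Continuing the same illustrative sketch, the protection scope and the noise toggle could be expressed as two extra settings. Again, every name here is an assumption made for the example, not Bodyguard’s actual API.

```typescript
// Hypothetical sketch of protection scope and the noise toggle; names are
// illustrative assumptions, not Bodyguard's actual API.
type ProtectionScope = "individual" | "general";

interface ModerationSettings {
  // "individual": only content aimed at the account holder is moderated.
  // "general": all toxic content is moderated, whoever it targets.
  protection: ProtectionScope;
  // One switch covering ads, spam and scams ("noise").
  removeNoise: boolean;
}

// Example: a media website protecting its whole community and keeping
// comment sections free of spam.
const mediaSiteSettings: ModerationSettings = {
  protection: "general",
  removeNoise: true,
};
```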

Personalized recommendations

At Bodyguard, we work with our customers to co-create the right moderation setup. We first make sure there is a clear set of community guidelines that set out what the organization deems acceptable and what needs to be prevented, before ensuring that those guidelines are reflected in the moderation rules. To go even further, we constantly check in with customers to confirm that they are getting the right level of moderation, and adjust according to their needs when necessary.

Ready to find out how this would work for your organization? Book a short demo here.