How to better moderate comments on Twitch

Twitch is valuable and appreciated because it creates a closeness between streamers and viewers that traditional media like TV cannot match. Want to learn why and how to moderate live comments on Twitch? Check out this blog post!

Arnaud


Twitch is one of the most popular sites on the internet for the gaming community. The live-streaming platform features a wide range of real-time competitive gaming broadcasts, alongside streams from non-professional gamers and video game reviews. Although its primary focus is gaming, Twitch also hosts content covering music, sports and creative hobbies in its Just Chatting section.

While the platform goes to some lengths to keep violent or sexual content off its streams, comments on live streams often contain language that is inappropriate and potentially harmful.

Twitch is valuable and appreciated because it creates a closeness between streamers and viewers that traditional media like TV cannot match. To keep the chat healthy, content moderation is required to protect the community, the streamer's online reputation and their partner relationships.

However, effective content moderation can be difficult to achieve in such a fast-paced environment.

The wild world of live video game streaming 

Since launching in 2011, Twitch has gained traction to become a hugely popular website within the online gaming community. Acquired by Amazon in 2014, Twitch is now the biggest and most influential gaming platform, hosting over 100,000 live streams from over 8 million streamers, with more than 31 million users visiting the site daily.

Most Twitch users are between 16 and 35 years old, with a trend towards the younger end of the scale; 65% of Twitch users are male and 35% are female. Within the gaming community there is a strong belief in an absolute right to free speech, even when what is being said is offensive or hurtful. These beliefs, coupled with the fiercely competitive nature of online gaming, often result in disruptive online behaviour.

People within this insular gaming world often comment on streams in abusive, racist or sexually inappropriate ways. These comments can be directed at other users or at the streamers and players themselves.

Because these comments appear instantly, it is incredibly difficult for moderators to protect the Twitch community against online hate speech, insults and harassment.

Nowadays, having a moderation solution that can handle comments in real time is vital. Relying on human moderators alone is not workable, as it cannot scale to the most popular events or streams. Moderating a live chat is crucial to protecting online communities from online hate, and that protection extends to the whole community, not just the people interacting in the chat.

With an automated moderation system, human moderators will be able to better protect the online community. They will also have more time to focus on core tasks such as engaging with the audience via Twitch polls and other features.

Moderating Twitch comments in real-time

One of the most popular games streamed on Twitch is Riot Games' League of Legends (LoL). Live streams of LoL games have accumulated over 48 billion hours of views. Chats during these streams can often become polluted with insults, spam, and sexist and racist abuse. Popular female LoL player Vicksy recently spoke publicly about her experience of sexist abuse during a LoL Twitch Rivals tournament.

If brands and streamers do not protect their channels, their online reputation, relations with partners and their community will be affected by online toxicity.

Everyone should be able to enjoy live streams without being subjected to online hate or having to witness it. Events such as the League of Legends World Championship attract hundreds of thousands of viewers and commenters. Unfortunately, not everyone involved will be civil or polite to others.

To protect the entire gaming community from viewing or being subjected to hate speech, moderators must be able to locate and remove toxic comments and ban abusive users as quickly as possible. This task is daunting enough given the huge volume of comments and viewers, and the use of emoticons and emojis adds further difficulty to moderating live-stream events.

So, how can moderators protect the online community from being victims of online hate speech or viewing toxic behaviour while also protecting free speech in real-time? One solution is to use AI to combat toxic online comments. 
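To make the idea concrete, here is a toy sketch in Python of what an automated real-time filter does: score each incoming message and map the score to a moderation action. The wordlist, scoring rule and thresholds are illustrative placeholders, not how any production system (including Bodyguard.ai) actually works.

```python
# Toy real-time chat filter: classify each message as it arrives and
# decide whether to allow it, delete it, or time the user out.
# BLOCKLIST and the thresholds below are illustrative placeholders.

BLOCKLIST = {"idiot", "trash", "loser"}  # hypothetical toxic terms

def toxicity_score(message: str) -> float:
    """Return a score in [0, 1]; higher means more likely toxic."""
    words = [w.strip("!?.,").lower() for w in message.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in BLOCKLIST)
    return min(1.0, hits / len(words) * 3)

def moderate(message: str) -> str:
    """Map a toxicity score to a moderation action."""
    score = toxicity_score(message)
    if score >= 0.8:
        return "timeout"  # severe toxicity: remove the user temporarily
    if score >= 0.3:
        return "delete"   # remove the message, keep the user
    return "allow"

for msg in ["gg ez, well played", "you absolute idiot"]:
    print(f"{moderate(msg):7s} <- {msg!r}")
```

In practice the keyword check would be replaced by a trained language model, but the allow/delete/timeout decision loop stays the same shape.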

The Bodyguard.ai solution meets all of these criteria. Our technology combines linguistic expertise with artificial intelligence to deliver automatic, intelligent, real-time moderation. It contextualises every comment, grasps linguistic subtleties and determines how toxic a comment is.

Bodyguard.ai facilitates the work of moderation teams, helps keep chat discussions healthy and protects the community. It also provides streamers with useful insights into their community: for example, streamers can identify their most engaged and most positive subscribers, giving them a clearer view of their audience. A streamer can even reward or thank loyal viewers individually.


Through its innovative use of machine learning technology, Bodyguard.ai delivers a moderation solution that allows users to freely express themselves without the risk of being subject to online toxicity. 

Bodyguard.ai uses contextual analysis to make sure free speech is protected: it understands gaming-specific language, emojis, spam of all kinds and emote-only chats. Bodyguard.ai allows businesses to moderate in real time and assists human moderators in their work.
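As a toy illustration of why context matters (this is not Bodyguard.ai's actual algorithm), the same word can be harmless game talk or a personal attack depending on its target. The wordlists below are invented for the example:

```python
# Toy context-aware check: "kill" aimed at an in-game objective is normal
# gaming talk, while the same verb aimed at a person is a threat.
# Both wordlists are illustrative placeholders.

GAME_TARGETS = {"boss", "dragon", "baron", "turret"}
PERSON_TARGETS = {"you", "yourself", "him", "her", "streamer"}

def is_threat(message: str) -> bool:
    """Flag 'kill <target>' only when the target is a person, not a game objective."""
    words = [w.strip("!?.,").lower() for w in message.split()]
    for i, w in enumerate(words):
        if w == "kill" and i + 1 < len(words):
            if words[i + 1] in PERSON_TARGETS:
                return True
            if words[i + 1] in GAME_TARGETS:
                return False
    return False

print(is_threat("let's kill baron now"))  # game objective -> not a threat
print(is_threat("go kill yourself"))      # personal attack -> flagged
```

A keyword filter with no notion of context would either flag both messages or neither; a context-aware system can tell them apart.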

Discover more about how Bodyguard.ai is working towards building safer gaming communities by visiting Bodyguard.ai today.