Online hate, the main enemy of journalism

In a world where online toxicity is becoming increasingly common, journalists, especially women, often face harassment and threats.

Jean

Many fear for their safety, and some have even resigned from their positions to escape the abuse. Without an instantaneous, intelligent moderation solution, they remain exposed to victimisation simply for doing their job.

Where has the issue of online hate against journalists come from?

Journalists struggle to do their jobs when exercising their right to free speech leaves them living in fear. Not only do some journalists work in physically dangerous situations, such as wars and riots, but they also face threats to their safety because of the content they publish online.

In Canada this year, three female journalists in particular, Rachel Gilmore, Erica Ifill and Saba Eitizaz, received racist and sexist messages after writing articles and sharing them on social media.

This online hate goes far beyond a few unpleasant comments. Journalist Fatima Syed was threatened with rape and death after writing about ethnic issues, CTV News reported. The perpetrators seem to have no filter or shame about what they post. A journalist's role is to inform an audience, not to open the door to harassment.

What are the consequences of online hate?

When you go to work, you expect to be treated with respect by colleagues and the people you come into contact with. No one should have to deal with insults and death threats while pursuing their career.

The consequences range from anxiety and depression to journalists, both men and women, reluctantly changing jobs out of concern for their safety. Online platforms can start to feel hostile to their work, which ultimately undermines the delivery of trustworthy information about current events.

If a female journalist fears the backlash an article may provoke, she may no longer feel able to voice her opinion, simply for doing her job. This intimidation leads to self-censorship, which undermines freedom of expression.

Protecting journalists: who is responsible?

The aggression and intimidation faced by journalists can be addressed through collaboration across the media industry, from publishing organisations to social media platforms. As employers, they have a responsibility to keep journalists safe. However, many journalists are freelancers, which leaves them with even less protection against these attacks.

Real-time content moderation is one answer to this problem. Whether a journalist is self-employed or works for a company, comments can be handled by a solution such as Bodyguard.ai, which removes toxic content before it goes live while acting in favour of free speech.

The tool combines AI with the expertise of a linguistic team to detect, analyse and moderate hateful comments and threats, taking context into account, before they reach their target. It works by classifying toxic messages against parameters set by the client and removing them before they appear on a platform.
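To illustrate the general idea, here is a minimal, hypothetical sketch in Python of a real-time filter that classifies incoming comments against client-configured categories and holds back toxic ones before publication. The class names, categories and keyword rules are illustrative assumptions only; they do not represent Bodyguard.ai's actual API or detection logic, which relies on AI and contextual linguistic analysis rather than simple keyword matching.

```python
# Hypothetical sketch of real-time comment moderation.
# Not Bodyguard.ai's API: names, categories and keywords are illustrative only.

from dataclasses import dataclass, field


@dataclass
class ModerationPolicy:
    """Client-configured parameters: which toxicity categories to block."""
    blocked_categories: set[str] = field(
        default_factory=lambda: {"racism", "misogyny", "threat"}
    )


# Toy keyword map standing in for the AI and linguistic analysis layer.
CATEGORY_KEYWORDS = {
    "threat": ["kill you", "hurt you"],
    "misogyny": ["get back in the kitchen"],
    "racism": ["go back to your country"],
}


def classify(comment: str) -> set[str]:
    """Return the toxicity categories detected in a comment."""
    text = comment.lower()
    return {
        category
        for category, keywords in CATEGORY_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    }


def moderate(comment: str, policy: ModerationPolicy) -> bool:
    """Return True if the comment may be published, False if it is held back."""
    detected = classify(comment)
    return not (detected & policy.blocked_categories)


if __name__ == "__main__":
    policy = ModerationPolicy()
    incoming = [
        "Great reporting, thank you!",
        "I will kill you for writing this.",
    ]
    for comment in incoming:
        status = "published" if moderate(comment, policy) else "blocked before going live"
        print(f"{comment!r} -> {status}")
```

The key design point this sketch mirrors is that moderation happens between submission and publication, so a blocked message never reaches the journalist or their readers.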

Not only does moderation mean racist, homophobic and misogynistic comments are removed before they reach journalists, but readers never see them either. This protects the journalist, the reputation of the publishing outlet and even the topic itself when it comes to sensitive interviews and subjects.

Bodyguard.ai moderation protects journalists and the organisations they work with, allowing them to carry out their roles effectively without having to censor their own content. It gives them freedom of speech and the power to write the truth without facing intimidating consequences.

Find out how Bodyguard.ai can protect the valuable work of journalists so they can report without being attacked.