Introducing Post Scoring from Bodyguard.ai
Innovation is one of Bodyguard.ai’s core values. We show it by continuously improving our solution, and constantly evaluating how we can offer something more to our customers.
The Bodyguard.ai team
We harnessed the potential of Artificial Intelligence to help create a safer, healthier and more positive online space for people and businesses. Now, we’re making our moderation even more valuable for users by leveraging one of the freshest technologies around: Large Language Models (LLMs).
We’ve used an LLM to create the newest feature of Bodyguard.ai: Post Scoring. Post Scoring takes the functionality and value of Bodyguard.ai to the next level, offering our customers much more than moderation. In fact, it has the power to completely change the way they think about their social media.
What is Post Scoring?
Post Scoring lets users obtain a predictive toxicity score for any content they plan to post on social media, based on LLM technology. It couldn’t be easier to use, but it makes a serious difference to how our customers can approach their social media.
Using LLM technology to analyse and understand the content, Post Scoring assigns a score to each social media post, so that users can anticipate the kind of response the post is likely to receive. The post content is analysed, and a score is instantly generated based on key criteria, including keywords, topics, entities and celebrity mentions. The score indicates how ‘risky’ a post is in terms of attracting negative attention, criticism, hateful comments, scam or spam messages. Post Scoring works for multiple languages, in addition to English.
The process is easy:
The user goes to the Posts page within the Bodyguard.ai dashboard
The user enters their draft content into the box and hits ‘Score my Post’
A score is instantly generated which estimates how risky the content is
There are three possible scores: Low, Medium and High
Based on the post’s predictive score, users can decide whether to make changes to the content and improve its score (decreasing the risk of toxicity) or go ahead and post the content as it is.
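The three-level score described above can be sketched as a simple thresholding step. The numeric risk estimate and the cut-off values below are assumptions for illustration, not Bodyguard.ai’s actual model:

```python
# Hypothetical sketch: mapping a toxicity-risk estimate (as an LLM-based
# model might produce) to the three Post Scoring labels. The 0..1 input
# and the threshold values are illustrative assumptions.

def score_post(risk_estimate: float) -> str:
    """Map a 0..1 toxicity-risk estimate to Low, Medium or High."""
    if risk_estimate < 0.33:
        return "Low"
    if risk_estimate < 0.66:
        return "Medium"
    return "High"
```

A post judged unlikely to attract negative attention would come back as “Low”, while content touching risky topics or entities would push the estimate towards “High”.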
LLM: the technology behind Post Scoring
A branch of Artificial Intelligence, LLMs are the current buzzword in tech.
LLMs are machine learning models trained to understand natural language using large amounts of data and text (hence the name). Over time, they learn to understand, analyse, translate, predict and generate content themselves, making them really useful models for a variety of functions. ChatGPT is one of the best known examples of an LLM in action.
When it comes to content moderation, LLMs ensure that text is accurately interpreted, so that the appropriate action can be taken on a post, for example, removing a comment it recognises as hateful.
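That label-to-action step can be sketched roughly as follows; the label set and the actions are illustrative assumptions, not Bodyguard.ai’s actual moderation pipeline:

```python
# Hypothetical sketch: acting on an LLM's classification of a comment.
# The labels ("hateful", "spam", "neutral") and the remove/keep actions
# are assumptions for illustration only.

REMOVE_LABELS = {"hateful", "spam"}

def moderate(label: str) -> str:
    """Return the moderation action for a classified comment."""
    return "remove" if label in REMOVE_LABELS else "keep"
```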
Put simply, LLMs are no-nonsense, proven technology that makes a tangible difference to the effectiveness of content moderation.
What are the benefits of Post Scoring?
Post Scoring empowers users to make better decisions when it comes to their social media and brings value in two distinct ways:
Pre-scoring: users can explore content creation and test different versions of their posts, adjusting them based on the risk score they receive. The score is transparent, and gives reasons why particular content could trigger intense reactions from audiences.
Post-scoring: once a post has been shared on social media, the risk score generated by Bodyguard.ai is displayed beside the post within the dashboard, alongside data on the amount of toxicity the post actually received. This additional indicator helps users deepen their understanding of their audience by comparing predicted and actual reactions, allowing them to tailor future content to audience behaviour.
Post Scoring shows additional topics that might be associated with the content being posted. This helps users understand further why their post has been given a certain score, and why it might be considered risky, even if the original content seems innocuous.
Take Post Scoring even further, with Alerting from Bodyguard.ai
To enhance the effectiveness of Post Scoring even further, users can combine the feature with our Alerting functionality, which sends email alerts when there are unusual peaks in commenting activity, such as:
Higher volume of reactions
Higher toxicity rate on a post
Higher positivity rate
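The kind of peak detection behind these alerts can be sketched as a comparison of current activity against a recent baseline. The metric names, the baseline, and the doubling threshold are assumptions for illustration, not Bodyguard.ai’s actual alerting logic:

```python
# Hypothetical sketch: flag metrics that spike relative to a baseline.
# Field names and the 2x ratio are illustrative assumptions.

def check_alerts(baseline: dict, current: dict, ratio: float = 2.0) -> list:
    """Return the names of metrics that at least doubled vs the baseline."""
    alerts = []
    for metric in ("reactions", "toxicity_rate", "positivity_rate"):
        if current[metric] >= ratio * max(baseline[metric], 1e-9):
            alerts.append(metric)
    return alerts
```

In this sketch, a post whose reaction count jumps well above its usual level would trigger a “reactions” alert, while stable toxicity and positivity rates would stay quiet.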
With Post Scoring and Alerting activated, users can be confident they have a robust safety net which allows them to avoid, anticipate and respond to toxicity on their social media. Combined with Bodyguard.ai’s rigorous moderation, users will know they are benefitting from the most comprehensive and effective content moderation available.
Post Scoring is available as part of Bodyguard.ai’s Advanced Package. Whether you’re newly discovering Bodyguard.ai and want to take control of your social media moderation for the first time, or you’re an existing customer who wants to maximise the power of your moderation, we’re here to help. Talk to us about your moderation needs today and let's get started!