Human moderation is still the main line of defense against hateful comments. But this approach has clear limitations: it is prone to bias, too slow to react, psychologically draining for moderators, and very expensive. Moderation as we know it today needs to evolve so it can closely analyze interactions between users. Given the sheer volume of content, it is becoming impossible to moderate everything posted on networking platforms, social networks, brand and association pages, media sites, gaming platforms, and blogs.

GAFA companies use artificial intelligence (machine learning and deep learning) to detect some of the hateful content on their platforms and moderate it automatically. This technology acts as a filter, but it carries many biases, not to mention a significant error rate and many false positives. This moderation, at once excessive and limited, comes at the users' expense.
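To make the trade-off concrete, here is a minimal sketch of what such an automated filter looks like in practice, assuming scikit-learn is available. The toy dataset, threshold values, and variable names are purely illustrative, not any platform's real moderation system; the point is that a probabilistic filter can route uncertain cases (the likely false positives) to human moderators instead of blocking them outright.

```python
# Illustrative sketch of a hateful-comment filter (toy data, not a real model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: 1 = hateful, 0 = acceptable.
comments = [
    "I hate you, get off this platform",
    "You people are disgusting",
    "Great article, thanks for sharing",
    "I love this community",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

# Expose a probability rather than a hard block/allow decision, so that
# borderline scores can be escalated to a human moderator.
probs = model.predict_proba(["you are disgusting"])[:, 1]
flag_for_human_review = 0.3 < probs[0] < 0.8  # uncertain zone -> human review
```

The hybrid design sketched here (machine filter plus human escalation) is one common way to limit the impact of the false positives mentioned above: the model only decides on its own when it is confident.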