Online hate and harassment: Can technology solve online abuse?

Governments around the world are increasingly concerned by the prevalence, spread and impact of harmful online content, such as harassment, bullying and hate speech. Online abuse poses myriad concerns: it can inflict harm on targeted victims, pollute civic discourse, make online environments unsafe, create and exacerbate social divisions, and erode trust in the host platforms.

Many hope that increasingly sophisticated and powerful algorithms will ‘solve’ the problem of online abuse by making this content easier to detect and take down. However, abusive content detection has proven to be a wicked challenge. Not only is it a very difficult engineering task; it is also imbued with complex legal, social and political challenges. Researchers are increasingly drawing attention to the biases in some widely used tools and datasets, raising concerns that they might perpetuate the injustices they are designed to overcome.

The Alan Turing Institute

This is an ongoing project to collate and organise resources for research and policymaking on online hate. These resources aim to cover all aspects of research, policymaking, the law and civil society activism to monitor, understand and counter online hate. Some of the resources may cross into closely related areas, such as offline hate, online harassment and online extremism. Resources are focused on the UK but include international work as well.

This document from 2019 details research undertaken to answer the question:

How much online abuse is there?
