A Tool for Detecting Hate Speech on the Internet Has Been Developed
As the number of hateful comments online grows, artificial intelligence is coming to the rescue: researchers from Vytautas Magnus University (VMU), together with the Lithuanian Human Rights Center, the European Foundation of Human Rights and the Department of National Minorities under the Government of the Republic of Lithuania, have developed a tool intended to help automatically detect and remove cases of hate speech, for instance in the comment sections of news portals. According to Professor Tomas Krilavičius, Dean of the VMU Faculty of Informatics and one of the developers of the project “#Be Hate-Free: Building Hate-Free Communities in Lithuania”, building such a tool is quite challenging: people themselves often find it difficult to recognize hate speech, and artificial intelligence needs to be trained thoroughly to do so.
“While working with social media texts and comments at the university, we noticed a lot of negativity and incitement to hatred on the Internet. However, identifying hate speech is difficult for artificial intelligence: good and effective solutions in this area have not yet been proposed for any language, since the definition of hate speech itself requires careful consideration of context. Moreover, there is no clear legal regulation, and people themselves sometimes disagree on what counts as hate speech and what does not,” says the professor.
He also notes that hate speech is not a comment or insult directed at a specific person, but rather an attack on, and threats against, more vulnerable groups of society, such as national minorities or LGBT communities. “In other words, if you write something mean only about me, it’s not going to count as hate speech. But if certain vulnerable groups are attacked and threatened with violence, then it will count as hate speech,” the researcher explains. According to Krilavičius, the hate speech detection tool developed by the team of the VMU Faculty of Informatics together with its partners uses artificial intelligence and language technologies to assess the likelihood that a particular text is hate speech.
“We used many examples, some of which were marked as hate speech, and some of which were not. Such artificial intelligence solutions learn from various examples and word combinations. Employing more sophisticated methods allows the system to detect that one word may be used in place of another, similar word in a similar context. We reviewed many methods and experimented with a small data set before trying a larger one,” says Professor Krilavičius.
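The learning-from-labeled-examples approach the professor describes can be illustrated with a minimal sketch. The toy comments, labels, and the simple bag-of-words Naive Bayes classifier below are illustrative assumptions only, not the project’s actual model, which works on Lithuanian text and uses far more sophisticated methods:

```python
import math
from collections import Counter

# Hypothetical toy training set; the real project used a large annotated
# corpus of Lithuanian comments. Labels: 1 = hate speech, 0 = not.
TRAIN = [
    ("that group should be driven out by force", 1),
    ("violence against them is the only answer", 1),
    ("i strongly disagree with this article", 0),
    ("lovely weather in vilnius today", 0),
]

class BagOfWordsClassifier:
    """Naive Bayes over word counts with add-one smoothing."""

    def __init__(self, examples):
        self.word_counts = {0: Counter(), 1: Counter()}
        self.class_counts = Counter()
        for text, label in examples:
            self.class_counts[label] += 1
            self.word_counts[label].update(text.lower().split())
        self.vocab = set(self.word_counts[0]) | set(self.word_counts[1])

    def prob_hate(self, text):
        """Return an estimate of P(hate speech | text) in [0, 1]."""
        log_scores = {}
        total_docs = sum(self.class_counts.values())
        for c in (0, 1):
            total_words = sum(self.word_counts[c].values())
            score = math.log(self.class_counts[c] / total_docs)
            for word in text.lower().split():
                score += math.log(
                    (self.word_counts[c][word] + 1)
                    / (total_words + len(self.vocab))
                )
            log_scores[c] = score
        # Normalize the two log scores into a probability for class 1.
        m = max(log_scores.values())
        e0 = math.exp(log_scores[0] - m)
        e1 = math.exp(log_scores[1] - m)
        return e1 / (e0 + e1)

clf = BagOfWordsClassifier(TRAIN)
```

On this toy data, a comment sharing words with the hateful examples scores a higher probability than a neutral one, which is the basic effect the quoted description relies on.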
Experiments enabled the researchers to create an initial hate speech detection model, which was then tested to evaluate its performance. The results were promising enough to warrant further analysis. This led to the development of a hate speech recognition tool functioning in a real operating environment. In addition, a methodology for hate speech detection was developed.
Dean of the Faculty of Informatics, Professor Tomas Krilavičius
The solution was improved by using sample texts that the artificial intelligence later drew on when assessing other cases. The focus was on short texts, such as messages, comments, and posts on social media. The tool is now ready for demonstration and application. During the project, experts in artificial intelligence, technology, linguistics, law and other fields collaborated to develop the solution.
While the tool is expected to be very useful in combating hate speech on the Internet, the professor maintains that artificial intelligence in such solutions only assists people and will not replace them for a long time to come, since the task requires understanding context and additional skills such as the ability to recognize sarcasm. Thus, in some cases, a person will still need to assess possible manifestations of hate speech.
“The tool will indicate in percentage terms the probability that the text analyzed is hate speech. For example, if the probability reaches 70 percent, the comment will be automatically blocked, and if it reaches only 50 percent, it will be published. However, if the probability is between 50 and 70 percent, then the comment will be temporarily blocked and handed over to people for verification,” the VMU professor describes the principle behind the solution.
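The three-way decision rule the professor describes can be sketched directly. The handling of the exact boundary values is an assumption, since the article gives only the three ranges:

```python
def moderate(hate_probability: float) -> str:
    """Map the tool's hate-speech probability to a moderation action,
    following the thresholds described in the article.

    Boundary handling (>= 0.70 blocks, <= 0.50 publishes) is an
    assumption; the article only specifies the three ranges.
    """
    if hate_probability >= 0.70:
        return "block"         # automatically blocked
    if hate_probability <= 0.50:
        return "publish"       # published without intervention
    return "human review"      # temporarily blocked, handed to moderators
```

Keeping the middle band for human review reflects the professor’s point that the tool assists moderators rather than replacing them.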
The main challenges in developing such a tool include not only the fact that there is a lot of debate about the definition of hate speech, but also the fact that there is an insufficient number of examples from which artificial intelligence can learn. “We don’t have enough corpora and annotated examples, and preparing them takes a lot of work. Furthermore, training artificial intelligence models requires a considerable amount of computational resources,” says Professor Krilavičius.
The development of the tool also drew on foreign research and solutions, mainly for the English language. However, as the comments on Facebook and other platforms show, no current solution can effectively identify and remove hate speech. The new tool is expected to contribute to progress in this area.
The solution could be particularly useful for news portals that wish to keep their comment sections clean. With this in mind, testing and use of the tool are currently being discussed with media channels. In addition to developing the solution, the researchers are also encouraging discussions on hate speech to help understand the causes and origins of this phenomenon.