
AI could stop online arguments before they happen

Artificial Intelligence could be used to stop arguments online before they even happen.

Researchers at Cornell University, Google Jigsaw, and Wikimedia have teamed up to create software that scans conversations for verbal triggers and can predict whether they will end amiably or aggressively.

The software was preprogrammed to look out for warning signs of conflict, such as repeated direct questioning: "Why is there no mention of this?" and "Why didn't you look at that?"

Other triggers included sentences that open with second-person pronouns, like "Your sources don't matter". If these appear in the first reply, it is a strong indication that someone is trying to make the matter personal.

In contrast, 'friendly' conversations include liberal use of "please" and "thank you", greetings such as "How is your day going?", and hedging, such as the use of "I think".
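As a rough illustration of how such verbal triggers might be detected, the sketch below scores a first reply with simple regular expressions. The patterns, weights and function name are hypothetical examples, not the researchers' actual feature set.

```python
import re

# Hypothetical trigger patterns, loosely based on the cues described above.
DIRECT_QUESTION = re.compile(r"\bwhy (is|isn't|didn't|don't|wasn't)\b", re.IGNORECASE)
SECOND_PERSON_START = re.compile(r"^\s*(you|your)\b", re.IGNORECASE)
POLITENESS = re.compile(r"\b(please|thanks|thank you)\b", re.IGNORECASE)
HEDGING = re.compile(r"\b(i think|perhaps|maybe|it seems)\b", re.IGNORECASE)

def first_reply_warning_score(reply: str) -> int:
    """Return a crude 'likely to turn hostile' score for a first reply.

    Hostile cues (repeated direct questions, a second-person opening) raise
    the score; politeness and hedging lower it. Purely illustrative.
    """
    score = 0
    score += 2 * len(DIRECT_QUESTION.findall(reply))
    if SECOND_PERSON_START.search(reply):
        score += 3
    score -= 2 * len(POLITENESS.findall(reply))
    score -= 1 * len(HEDGING.findall(reply))
    return score

print(first_reply_warning_score("Your sources don't matter. Why is there no mention of this?"))
print(first_reply_warning_score("Thanks for the edit! I think we could add a citation here."))
```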

The conversations were also monitored for their general "toxicity" using Google's Perspective API, a tool that tries to gauge how friendly, neutral or aggressive a given piece of text is.
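For reference, the Perspective API is queried over HTTPS with a JSON payload naming the attributes to score. The minimal Python sketch below follows Google's publicly documented v1alpha1 endpoint and response shape, but the API key is a placeholder and the details should be checked against the current documentation.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; an API key must be obtained from Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Ask the Perspective API for a 0-1 toxicity score for `text`."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("Why didn't you look at that? Your sources don't matter."))
```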

The research used Wikipedia's "talk" pages, where editors discuss issues such as changes to phrasing in articles and the need for more accurate sources.

The software was tested by being given pairs of conversations that both began amicably, but where only one ended in personal insults.

At the end of the machine-learning trial, the software was able to predict which conversation would turn nasty just under 65 per cent of the time. Humans were still the better judges, getting it right 72 per cent of the time.
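One way to read that 65 per cent figure: for each matched pair, the model scores both conversations and nominates the one it thinks is more likely to derail, and accuracy is the fraction of pairs where it nominates the conversation that actually turned nasty. The sketch below shows that pairwise evaluation with a toy scoring function; the function names and data are illustrative assumptions, not the study's code.

```python
from typing import Callable, List, Tuple

def pairwise_accuracy(
    pairs: List[Tuple[str, str]],   # (conversation that derailed, conversation that stayed civil)
    score: Callable[[str], float],  # higher score = judged more likely to derail
) -> float:
    """Fraction of pairs where the model ranks the derailed conversation higher."""
    correct = sum(1 for derailed, civil in pairs if score(derailed) > score(civil))
    return correct / len(pairs)

# Toy example using a trivial scoring function.
toy_pairs = [
    ("Why is there no mention of this? Your sources don't matter.",
     "Thanks for catching that, I think we can fix the wording."),
]
print(pairwise_accuracy(toy_pairs, lambda text: text.lower().count("why")))
```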

Justine Zhang, a PhD student at Cornell University in New York state who worked on the project, told The Verge: "Humans have nagging suspicions when conversations will eventually go bad, and this [research] shows that it's feasible for us to make computers aware of those suspicions, too."

However, the researchers urged caution when it came to allowing machines to moderate conversations before they escalated into a row.

Zhang pointed out that "some disagreements are inherently useful," and if the software intervened too soon, it could "dissuade potentially constructive discussions."

She added: "There are cases of people managing to recover from bad conversations, so deciding when a machine should step in and mediate [is] an interesting question."

The study also found that a conversation apparently heading towards an argument could sometimes be pulled back from the brink by participants remaining firm but polite.
