Deliberation Laboratory

With the Deliberation Laboratory (DeLab), we are building a virtual moderator that uses artificial intelligence to recognize when discussions on social media are becoming increasingly destructive. The virtual moderator should also intervene in the debate to prevent escalation.

If one of the core elements of the European integration project is to create a single, deliberative public sphere, then recent events suggest that project is set to fail. Far from a Europe-wide sphere of “public reason”, the recent crisis is most evident on social media platforms like Twitter: especially when it comes to matters of identity, users repeatedly attack each other in highly emotional terms, focusing on what divides people rather than what unites them. With the Deliberation Laboratory (DeLab), we are developing a transformative online testing environment that allows us to explain the nature, causes, and consequences of citizens’ perceptions in deliberative public online dialogue across languages. By developing a virtual moderator that can follow different cultural scripts, we can test the conditions under which citizens and groups evaluate what they see as trustworthy and believable in online communication. In particular, we focus on Aristotle’s rhetorical triad – logos, ethos, pathos – as each of these elements has been shown to shape citizens’ perceptions. With DeLab, we propose a conversational AI intervention system that supports constructive online engagement within the tweet limit of 280 characters and beyond.

The problem

One of the many challenges facing our society is a lack of mutual interpersonal understanding. When it comes to matters of identity, social media users repeatedly attack each other in highly emotional terms, anxious to emphasize what divides rather than what unites.

The current solution

Since human moderators cannot oversee, for instance, 500 million tweets a day, moderation mostly means deletion. In response to increasing online incivility and verbal aggression, some news sites either gate their comment sections or remove them entirely.

Our proposed solution

Imagine a virtual moderator that enters the fray in real time. Imagine the virtual moderator can detect when things are getting destructive. Imagine the moderator intervenes by guiding users, for instance, through a systematized set of argument chains to help them make up their minds on a given issue.
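To make the first step concrete, here is a minimal, purely illustrative sketch of what "detecting when things are getting destructive" could look like. The cue lists, weights, and threshold below are invented for illustration; DeLab's actual moderator would rely on trained AI models rather than word lists.

```python
import re

# Hypothetical cue lists (assumptions for this sketch, not project data).
ATTACK_CUES = {"idiot", "stupid", "liar", "traitor", "hate"}
DIVISIVE_CUES = {"us", "them", "always", "never"}

def destructiveness_score(post: str) -> float:
    """Return a crude score in [0, 1]: the share of tokens matching the cue
    lists, with personal attacks weighted twice as heavily as divisive framing."""
    tokens = re.findall(r"[a-z']+", post.lower())
    if not tokens:
        return 0.0
    attacks = sum(t in ATTACK_CUES for t in tokens)
    divisive = sum(t in DIVISIVE_CUES for t in tokens)
    return min(1.0, (2 * attacks + divisive) / len(tokens))

def should_intervene(post: str, threshold: float = 0.2) -> bool:
    """Flag a post for moderator intervention above an (arbitrary) threshold."""
    return destructiveness_score(post) >= threshold

civil = "I see your point, but the data suggests otherwise."
hostile = "You are a stupid liar, they always hate us."
```

In a real system the score would come from a classifier trained across languages and cultural scripts, and an intervention would be triggered on the trajectory of a thread rather than on a single post.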

The project is funded for four years by the VolkswagenStiftung within its funding line Artificial Intelligence and the Society of the Future.

01 Aug 2021

Project DeLab Begins

The collaborative project DeLab starts. Our new website is online, and we look forward to digging deeper into social media analysis.

01 Aug 2021

The current approach to controlling insults and hate messages on social media goes no further than deleting the messages. We therefore want to develop a virtual moderator that uses artificial intelligence to recognize when discussions on social media are becoming increasingly destructive, and that intervenes in the debate to help prevent escalation.