Targeting the truth: artificial intelligence, election campaigns and democracy


While more than 60 countries recently declared their support for the “ethical” development of artificial intelligence in Paris, Germany is holding federal elections, and the influence of foreign states and actors on the vote is being hotly debated. Deepfakes in particular, i.e. realistic-looking but fabricated videos, photos or audio recordings, could pose a threat in this election campaign. Are the measures politicians have taken against disinformation and fake news enough? What role do fake news and deepfakes play in the election campaign, and what influence do they have on our democracy? How can deepfakes be recognized at all? And can artificial intelligence (AI) not only contribute to the spread of disinformation, but also help to combat it?

We spoke to two experts: Prof. Dr. Thorsten Strufe, Professor of Privacy and Network Security at the Karlsruhe Institute of Technology (KIT), and Isabel Bezzaoui, who is working on the development of an explainable AI for detecting disinformation in the research project “Disinformation Campaigns through Disclosure of Factors and Styles (DeFaktS)” at the FZI | Research Center for Information Technology. They explain how we can better arm ourselves against digital deception – and what role AI plays in this.

A conscious approach to disinformation is particularly important now

A conscious approach to disinformation is crucial, especially in view of the Bundestag elections on February 23, 2025. “Elections are a particularly sensitive moment in which fake news and deepfakes are used in a targeted manner to influence voters or cause confusion,” warns Strufe. This makes it all the more important that we learn how to deal with these challenges.

But how can we arm ourselves against this targeted manipulation? According to Bezzaoui, one of the most important measures is media literacy education that sensitizes people to influence and manipulation. But that alone is not enough. “Platforms profit economically from emotionalizing and polarizing content, which unfortunately also includes disinformation,” she explains. Regulatory measures are therefore needed to put pressure on social media platforms such as Facebook, Instagram and X to improve their content moderation.

AI as a threat to democracy?

Bezzaoui sees the greatest danger in the ability to manipulate and falsify content in order to influence opinion and ultimately voting behavior. “Generative AI makes it much easier to create manipulative content and demands a new level of critical thinking and digital skills from citizens,” says Bezzaoui. However, these skills must first be developed – and this is one of the biggest challenges facing our society.

It is not only the quality of disinformation that is problematic, but also the sheer quantity of it. Strufe explains: “It’s not just about creating the perfect deepfake. Rather, the aim is to ensure that similar content is shared again and again in order to achieve normalization.”

Deepfakes: the perfect deception or not?

Deepfakes – videos or audio files that look deceptively real – are a particularly sophisticated form of digital manipulation. “The software used to create deepfakes is getting better and better. Of course, this makes it much more difficult to detect them,” explains Strufe. It used to be easy to find mistakes – for example, if someone suddenly had six fingers or the background of the video was wrong. But these obvious signs are increasingly disappearing.

Examples of known deepfakes:

  1. Obama Deepfake (2018): A fake video of the former US president making statements he never made. This was used as an example to draw attention to the dangers of deepfakes.
  2. Pope in a designer coat (2023): An AI-generated image of the Pope in a white Balenciaga coat went viral and was thought by many to be real.
  3. Zelensky fake (2022): A manipulated video showed Ukrainian President Volodymyr Zelensky allegedly calling on his troops to surrender – a targeted disinformation campaign.

Are there still ways to expose deepfakes? Yes, says Strufe: “Sometimes you can tell by small details such as illogical eyelashes or a background that just doesn’t fit.” Nevertheless, he warns: “Most deepfakes circulating on the internet are not perfect. They are not really designed to deceive the viewer, but often serve other purposes – such as targeted influence through mass and repetition.”

Checklist: How to recognize disinformation and deepfakes

Noticeable picture or sound errors: Distortions, unnatural movements or incorrect shadows
Unusual sources: Is the source trustworthy? Does the same news exist in reputable media?
Checking facts: Use fact-checking services such as Mimikama, CORRECTIV, dpa-Faktencheck or BAIT (a fact-checking channel for young people on TikTok)
Emotional language: Extremely polarizing or alarming content could be a warning sign
Suddenly viral content: If a video or image goes viral quickly and without context, it is worth checking its origin via search engines

The Federal Office for Information Security (BSI) provides extensive background information on the technical aspects of deepfakes and describes possible detection features.

Not always detective work: How AI can help detect disinformation

In view of the flood of disinformation, manual fact-checking alone is hardly feasible. “However, AI can provide support here,” explains Isabel Bezzaoui. Large platforms are known to have implemented such systems to classify content automatically. However, these systems are neither freely available nor transparent in how they work.

Bezzaoui’s team in the DeFaktS project has developed a taxonomy with the help of several research assistants at the FZI in order to understand what constitutes disinformation and which linguistic indicators point to it. On this basis, an AI learns to distinguish between factual information and disinformation.
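The idea of linguistic indicators can be illustrated with a minimal sketch. The indicator lists and scoring below are purely hypothetical examples invented for this article, not the actual DeFaktS taxonomy, which is built from annotated data and trained machine learning models. The sketch only shows the general principle: stylistic cues in a text are counted and aggregated into a rough suspicion signal.

```python
import re

# Illustrative indicator lists -- hypothetical stand-ins for the kinds of
# stylistic cues a disinformation taxonomy might describe, NOT real DeFaktS data.
EMOTIONAL_WORDS = {"outrageous", "shocking", "disaster", "betrayal", "scandal"}
ABSOLUTE_WORDS = {"always", "never", "everyone", "nobody", "all"}

def style_indicators(text: str) -> dict:
    """Count simple stylistic cues often associated with manipulative content."""
    words = re.findall(r"[a-zäöüß']+", text.lower())
    return {
        "emotional": sum(w in EMOTIONAL_WORDS for w in words),
        "absolutes": sum(w in ABSOLUTE_WORDS for w in words),
        "exclamations": text.count("!"),
        # Words of more than three letters written entirely in capitals:
        "all_caps": sum(1 for t in text.split() if len(t) > 3 and t.isupper()),
    }

def suspicion_score(text: str) -> int:
    """Aggregate the indicator counts into a crude overall score."""
    return sum(style_indicators(text).values())
```

A real system would replace this hand-crafted scoring with a classifier trained on labeled examples, but the same intuition applies: emotionally charged, absolutist and alarmist language scores higher than sober, factual wording.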


What counts in the end is attentive observation

AI systems should be used in a way that serves the common good of society. Nevertheless, such AI is no panacea for everything we encounter, and will encounter, in the fight against disinformation. “Humans and machines should work together here to effectively counter disinformation campaigns,” emphasizes Bezzaoui. One possible solution is apps or browser extensions that alert users to potential disinformation and help them to better categorize content.

Although AI can be a valuable support in the fight against disinformation, the most important defense remains a critical approach to digital content. Media literacy, research and regulatory measures are essential to curb the spread of fake news and deepfakes. Ultimately, it is up to all of us to take a closer look, question sources and realize that not everything we see online is true.