AI-powered disinformation a threat to democracy and public trust


As AI grows increasingly sophisticated – approaching what some call ‘superhuman AI’ – technologists, policymakers, and the public must collaborate on strategies to mitigate the spread of misinformation and disinformation.

In today’s digital age, the proliferation of misinformation (the dissemination of false information without the intent to deceive) and disinformation (the dissemination of false information with the intent to deceive) presents one of the most significant challenges to societal cohesion at a global level. The advent of artificial intelligence (AI) has transformed and supercharged the information dissemination landscape, introducing both threats and opportunities.

Romania and Poland each reported increased Russian disinformation activity ahead of their presidential elections, with authorities warning that the Russian-backed Doppelgänger network was actively attempting to influence voters.

In the run-up to the 2025 German federal elections, Doppelgänger was also implicated in setting up over 100 pseudo-news websites, many of them mimicking mainstream media outlets. The campaign focused on posting AI-generated articles, many of which undermined Germany’s support for Ukraine and promoted the far-right Alternative for Germany (AfD) political party. Leveraging an army of bots on platforms such as X, Doppelgänger flooded social media with thousands of posts, creating an illusion of viral traction.

According to Will Ashford-Brown, Director of Strategic Insights at Heligan Group, the integration of Superhuman AI into disinformation campaigns has far-reaching consequences.

“We are starting to see the erosion of public trust in traditional media and other formerly trusted institutions. This depletion of trust risks increased cynicism, societal polarisation and the echo chambers that fuel the angry and disenchanted, which has serious implications. When individuals are unable to discern truth from falsehood, it undermines the foundation of informed citizenship.

“We have seen AI-generated disinformation interfere with democratic processes by spreading false narratives about candidates or policies, influencing election outcomes and ultimately undermining public confidence in governance. There are financial implications too: false information can manipulate financial markets, damage corporate reputations, and facilitate sophisticated scams, leading to significant global economic losses.

“Addressing the challenges posed by AI-enhanced misinformation and disinformation requires a multifaceted approach. Developing and deploying AI tools capable of detecting and flagging deceptive content is crucial and should be a key investment area for those in this sector. But it’s not all about investment: governments and international bodies have a wider role in establishing frameworks to regulate the use of AI in information dissemination, ensuring accountability and ethical standards are upheld.”

As trust in media institutions shifts, enhancing media literacy among the general public is also important: it empowers individuals to critically assess information sources and recognise potential disinformation, reducing its impact.

“The rise of superhuman AI has undeniably transformed the information landscape, presenting both opportunities and significant risks. As AI continues to evolve, it is imperative for all of us – technologists, policymakers, and the public – to collaborate in developing strategies to mitigate the spread of misinformation and disinformation. By fostering a culture of critical thinking and implementing robust safeguards, we can harness the benefits of AI while safeguarding the integrity of information and, ultimately, our sense of truth,” concluded Ashford-Brown.
