Elections loom in 2024 in more than 60 countries that are home to roughly half the world’s population, including India, South Africa, Pakistan, Britain, Indonesia, and the United States, as well as the European Union, making this a pivotal year for the state of democracy.
In the era of AI, 2024 is seen as a critical stress test for political systems grappling with new technologies that amplify the reach of disinformation. The first significant trial came in Taiwan in January: despite a concerted disinformation campaign widely attributed to China, voters elected Lai Ching-te president, a display of resilience against misleading information.
Beijing regards Lai as a dangerous separatist because of his advocacy for Taiwan’s independence, and conspiracy theories and derogatory remarks about him surged on TikTok in the lead-up to the election.
An investigation by AFP Fact-Check found that many of these videos originated on Douyin, the Chinese version of TikTok. How other nations will fare remains uncertain, as generative AI threatens to deepen existing trends of polarization and declining trust in mainstream media.
Last year, the circulation of fabricated images, such as those depicting a nonexistent arrest of Donald Trump or a fake announcement by Joe Biden of a general mobilization to support Ukraine, showed how far the technology has advanced. The telltale signs of fakery, once evident in AI’s struggles with details such as fingers, are rapidly disappearing, undermining detection efforts.
The stakes are high: the World Economic Forum (WEF) identifies misinformation and disinformation as the foremost global risk over the next two years, warning that undermining the legitimacy of elections could lead to internal conflict, terrorism, and, in extreme cases, state collapse.
Groups linked to Russia, China, and Iran are using AI-powered disinformation to “shape and disrupt” elections in rival countries, according to the threat-intelligence firm Recorded Future. The EU elections in June are likely to face campaigns targeting the bloc’s cohesion and its support for Ukraine, echoing the Kremlin-linked “Doppelganger” operation uncovered in 2022.
Repressive regimes may exploit the disinformation threat to justify increased censorship and other rights violations, the WEF notes. States are seeking to respond with legislation, but lawmaking is slow compared with the rapid advance of AI.
Initiatives such as India’s proposed Digital India Act and the EU’s Digital Services Act aim to hold platforms accountable for tackling disinformation, but skepticism surrounds how effectively they can be enforced. Both China and the EU are also developing comprehensive AI laws, though implementation will take time.
Under mounting pressure, tech companies have introduced their own measures. Meta requires advertisers to disclose when content is made with generative AI, and Microsoft offers a tool that lets political candidates authenticate their content with a digital watermark. Yet the platforms increasingly rely on AI itself for verification, raising concerns about how effective automated efforts against disinformation can be.