05.06.24 Big Tech

Deepfakes and their threat to global democracy

More than 2 billion voters in 50 countries are set to head to the polls in 2024. And with so many elections imminent, concern is growing among experts, politicians and the public about the potential threat posed by a rapidly evolving form of AI technology: deepfakes.

A deepfake is audio or visual content created with generative AI to mimic a person’s likeness or voice. Faked videos and clips can then be circulated on social media.

Increasingly sophisticated generative AI has made deepfakes more difficult to spot. At the same time, broader access to AI tools has meant that more and more of this content is spreading across social platforms.

That matters because almost half of UK adults say they use social media as a source of news, according to recent research released by the UK regulator Ofcom.

Deepfakes are part of a larger climate of online misinformation that threatens to disrupt established democratic processes. Ahead of this week’s EU elections, a new investigation by the Bureau of Investigative Journalism (TBIJ) has revealed that thousands of scam adverts featuring AI-manipulated videos and false information about European politicians have been circulating on Facebook in recent months.

Deepfakes themselves are a growing issue. Audio clips, which can be harder to detect than faked videos, are finding particular traction.

In November, Sadiq Khan, the mayor of London, was the subject of a sham audio clip in which he appeared to make inflammatory remarks before Armistice Day; Khan later said the clip almost caused “serious disorder”. The previous month, an audio “recording” of Keir Starmer supposedly berating his staff went viral during the Labour party conference, receiving 1.6m views on X, formerly Twitter.

The day before February’s general election in Pakistan, an audio clip spread on social media of the jailed former prime minister Imran Khan calling for an election boycott, which his party quickly denounced as a hoax. (Four days later Khan, who had been using AI-generated audio to rally his supporters from behind bars, “delivered” an official victory speech via deepfake technology.)

And the US presidential election, coming up in November, has already been subject to similar interference. In January, an AI-generated phone message imitating President Biden urged Democrats in New Hampshire not to vote in the state’s primary.

The presence of increasingly realistic deepfakes means they can also be used to cast doubt on the authenticity of genuine footage. Earlier this year, Donald Trump falsely claimed that videos of his public slip-ups had been AI-generated.

Regulators are attempting to catch up with the technology. Last year the EU reached a provisional agreement on its AI Act, the world’s first comprehensive law regulating artificial intelligence; the act will require transparency so that users know when they are consuming AI-generated content. In the UK, the Artificial Intelligence (Regulation) Bill is making its way through parliament.

Tech companies have also launched their own efforts to curb the spread of election misinformation on their platforms. This year, TikTok, X and Meta (which owns Facebook and Instagram) were among more than 20 companies to sign an accord pledging “to combat deceptive use of AI in 2024 elections”. The agreement commits them to developing ways to detect and label AI-generated content.

The urgency of these issues is only becoming more apparent: last month, the Guardian reported that Meta had approved a series of AI-manipulated political adverts during India’s election which spread disinformation and incited religious violence.

It is vital that regulators keep pace with these fast-evolving strategies of deception and disruption. Whether they succeed could carry huge implications for the future of global democracy.

Reporter: Billie Gay Jackson
Tech editor: Jasper Jackson
Deputy editor: Katie Mark
Editor: Franz Wild
Fact checker: Alex Hess

Our reporting on Big Tech is funded by Open Society Foundations. None of our funders have any influence over our editorial decisions or output.