Earlier this year, thousands of Democratic voters in New Hampshire received a robocall urging them to stay home rather than vote. The call sounded as though it came from President Joe Biden, but investigations revealed it was a deepfake: synthetic media created with artificial intelligence (AI) to pass as real. The incident highlights the growing threat that deepfakes pose to the democratic process, not only in the UK’s current election but also in the upcoming US election.
Deepfake adverts impersonating Prime Minister Rishi Sunak have already surfaced, and political activists are circulating fake videos in key election battlegrounds. Deepfakes have proliferated because they are cheap and easy to create, requiring no prior knowledge of AI. They spread through paid advertising and social platforms, sowing misinformation and eroding trust in the political process.
Efforts to combat deepfakes are underway, with some countries considering penalties for their creation and dissemination. Tech giants such as Google and Meta have introduced policies requiring politicians to disclose the use of AI in election adverts. Major tech companies, including OpenAI, Amazon, and Google, are also collaborating on technology to detect and counter deepfakes.
There are challenges, however. The absence of a standard watermark, and the ease with which watermarks can be stripped, make deepfakes difficult to track. Deepfakes can also be shared through channels such as email and encrypted messaging apps, bypassing platform rules altogether.
To protect democracies from the threat of AI deepfakes, a responsible AI mechanism is needed: one that detects and removes audio and video deepfakes at their inception, much as a spam filter intercepts junk email. A coalition of leading technology companies, including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, has pledged to work together to combat harmful AI content.
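To make the spam-filter analogy concrete, here is a minimal sketch of how such a gate might sit at the point of upload. It is purely illustrative: the `score_media` function, the thresholds, and the actions are hypothetical placeholders, not any platform’s actual system.

```python
# Minimal sketch of a spam-filter-style deepfake gate for uploaded media.
# All names here are hypothetical: score_media() stands in for whatever
# detection model a platform might deploy; no real API is implied.

from dataclasses import dataclass

BLOCK_THRESHOLD = 0.90   # confident deepfake: reject outright
REVIEW_THRESHOLD = 0.60  # uncertain: hold for human review


@dataclass
class ModerationResult:
    action: str   # "publish", "review", or "block"
    score: float  # detector's deepfake probability


def score_media(media_bytes: bytes) -> float:
    """Placeholder for a real audio/video deepfake detector.

    A production system would run an ML classifier here; this sketch
    simply returns a dummy score.
    """
    return 0.0


def moderate_upload(media_bytes: bytes) -> ModerationResult:
    """Gate media at upload time, the way a spam filter gates email."""
    score = score_media(media_bytes)
    if score >= BLOCK_THRESHOLD:
        return ModerationResult("block", score)
    if score >= REVIEW_THRESHOLD:
        return ModerationResult("review", score)
    return ModerationResult("publish", score)


if __name__ == "__main__":
    # With the dummy detector, an upload scores 0.0 and is published.
    print(moderate_upload(b"example media payload"))
```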
While these efforts are commendable, more needs to be done. Responsible AI solutions should go beyond identifying and removing deepfakes: they should trace content to its origin, ensure transparency, and foster trust in the news users consume. With the UK and US elections approaching, urgent action is required to develop and deploy effective countermeasures against political deepfakes.
Without effective regulation and responsible AI technology, the integrity of information is compromised and the adage “seeing is believing” no longer holds. Voters must treat every political advertisement, text, speech, audio clip, or video with caution to avoid falling victim to deepfakes designed to undermine democracy.