Signals and Trends
When the second season of the BBC mystery thriller series The Capture began airing in August 2022, the notion that a live broadcast could be hijacked with deepfake footage seemed somewhat implausible. Much of the conversation about deepfakes revolved around photographs and videos that were easily debunked: the doctored video of a 'drunk' Nancy Pelosi, the artificial intelligence (AI)–generated image of former President Donald Trump resisting arrest, and the deepfake footage purporting to show Ukrainian President Volodymyr Zelenskyy publicly capitulating to Russian demands.
The release of ChatGPT in November 2022, however, marked a shift in global awareness of the possibilities of AI and its attendant risks. Since then, other AI tools capable of generating code, images, music, text and voice at unprecedented speed and scale have been released to the public. There are early indications that AI tools will be deployed to interfere in elections on a scale beyond Russia’s social media disinformation campaigns targeting the 2016 U.S. presidential election.
Two days before Slovakia’s parliamentary election in September 2023, an audio recording of a purported discussion about election rigging between the leader of the Progressive Slovakia party and a journalist was posted to Facebook. The recording was immediately denounced by both parties and subsequently debunked by the news agency AFP as a hoax created using AI and synthetic voice technology. While it is unclear whether the recording had any impact on the outcome, Progressive Slovakia went on to lose the election.
In October 2023, two alleged deepfake audio recordings were released online on the first day of the UK Labour party conference. One purported to capture the Labour leader abusing party staffers; the other purported to show him criticising the city of Liverpool.
Already, fact–checkers are struggling to keep up with disinformation across social media platforms. X, formerly known as Twitter, dismissed its Election Integrity team and disabled a feature that let users report misinformation about elections in every jurisdiction but the European Union (EU). The Australian Electoral Commission reportedly struggled to get X to remove posts promoting disinformation ahead of the Voice referendum. Meta has gradually been distancing itself from news and political content in part due to laws requiring online giants to pay news publishers.
Public discourse
In a worldwide threat assessment in 2019, the U.S. Director of National Intelligence warned that U.S. adversaries and strategic competitors would attempt to use deepfakes or similar machine–learning technologies to augment influence campaigns directed against the U.S. and its allies and partners.
Four years later, a report by Google–owned cybersecurity firm Mandiant identified ‘numerous instances’ since 2019 in which AI–generated content had been used in politically motivated online influence campaigns from groups aligned with the governments of Russia, China, Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador and El Salvador.
With many countries gearing up for general elections in 2024, there are growing concerns about how generative AI tools, combined with social media, could be used to conduct manipulative information campaigns. Dame Wendy Hall, one of the world's leading computer scientists and a member of the UK AI Council, believes that AI's ability to damage democracy should be a more immediate concern than any existential threat posed by the technology.
To address these concerns, some have suggested that news organisations should review their screening processes to ensure undisclosed AI–generated content is not passed off as genuine. There is also pressure on tech giants to develop strategies for handling generative AI and political content.
In the U.S., lawmakers have sought clarification from Meta and X about how their organisations are addressing AI–generated content in political advertisements hosted on their respective social media platforms. The request came after Google announced that, from mid-November 2023, it would require all verified election advertisers to prominently disclose when their advertisements contain synthetic content that inauthentically depicts real or realistic–looking people or events. Meta and YouTube have since announced similar policies requiring disclosures for digitally created or altered content. Meta is also barring political advertisers from using its new generative AI tools in advertisements.
Separately, the U.S. Federal Election Commission has sought public comments on a petition asking the commission to clarify that the law against fraudulent misrepresentation applies to deceptive AI campaign advertisements. Some commissioners have nonetheless expressed scepticism about whether the commission has the authority to regulate AI advertisements.
In terms of the general risks posed by AI, leading AI companies in the U.S. have made voluntary commitments to manage those risks, including developing robust technical mechanisms (such as watermarking or provenance systems) to ensure that users know when content is AI–generated. As part of these commitments, Google launched SynthID, a tool for watermarking and identifying AI–generated images. SynthID embeds a digital watermark directly into the pixels of an AI–generated image; the watermark is imperceptible to the human eye but can be picked up by a dedicated detection tool.
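To illustrate the general idea of pixel-level watermarking, the sketch below embeds a short bit pattern in the least significant bits of an image's pixel values and later checks for it. This is only a toy example: SynthID's actual technique is proprietary and designed to survive cropping, compression and other edits, which a naive least-significant-bit scheme would not. The bit pattern and image here are stand-ins, not real signatures.

import numpy as np

# Hypothetical 8-bit signature; a real system would use a much longer, secret pattern.
WATERMARK_BITS = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed_watermark(pixels, bits):
    """Hide a bit pattern in the least significant bits of the first few pixel values."""
    marked = pixels.copy()
    flat = marked.reshape(-1)                              # view over the copied pixel data
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits    # overwrite LSBs only; each pixel changes by at most 1
    return marked

def detect_watermark(pixels, bits):
    """Return True if the expected bit pattern is present in the least significant bits."""
    flat = pixels.reshape(-1)
    return bool(np.array_equal(flat[:bits.size] & 1, bits))

image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)   # stand-in for an AI-generated image
marked = embed_watermark(image, WATERMARK_BITS)
print(detect_watermark(marked, WATERMARK_BITS))   # True: the mark survives in the pixel data
print(detect_watermark(image, WATERMARK_BITS))    # almost certainly False for an unmarked image

The point of the sketch is simply that the marked image is visually indistinguishable from the original, yet a detector that knows what to look for can verify its provenance.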
OpenAI is also developing a provenance classifier capable of identifying whether an image was generated by DALL·E 3. This comes after it withdrew its classifier for detecting AI–written text because of its low accuracy.
The EU is currently pushing for tech companies to put a watermark on AI–generated content as part of moves to introduce an AI Act.
Queensland perspective
Queensland is due to hold its state and local government elections in 2024. During the 2020 state election, a deepfake video of the Premier, Annastacia Palaszczuk, supposedly giving a press conference was released by the advocacy group Advance Australia. Whilst that video was evidently fake and captioned accordingly, the technology has since evolved, raising the risk that more realistic AI–generated videos could be released without any disclaimer.
Section 181 of the Electoral Act 1992 (Qld) requires that election material contain details of the person authorising the material. Section 185 of the Act further makes it an offence to publish false statements about a candidate, or to mislead voters, during the election period for an election by way of print, publication, distribution or broadcast. This includes publishing on the internet, even if the internet site on which the publication is made is located outside Queensland.
Expert opinion
In an article about the interaction between current Australian law and political deepfakes, Andrew Ray, a Visiting Fellow at the ANU College of Law, notes that copyright, tort and electoral law offer only very limited protections against the threats posed by deepfakes. He proposes two targeted amendments to the Electoral Act to reduce the threat posed to elections:
(i) making it an offence to publish altered images containing false or misleading statements regarding electoral matter; and
(ii) imposing obligations on internet service providers/content hosts in relation to altered images.
Digital Policy Futures view
While it is likely already an offence under the Electoral Act to publish AI–generated content aimed at misleading voters on the internet, the Act does not cover instances where a person has unwittingly shared such content believing it to be true. One approach to mitigating this risk is to amend the Act to require labelling of AI–generated content, in addition to the existing disclosure obligations for advertising and campaign materials.
Alternatively, there could be a national requirement for automatic watermarking of AI–generated content. By itself, however, this measure is insufficient to address the speed and scale at which disinformation spreads online. To effectively tackle disinformation, consideration must be given to measures capable of stemming it at the point of dissemination. This could be achieved through automated decision systems that detect, flag and possibly remove AI–generated content at the point of upload, or through tools capable of authenticating real images.
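As a rough illustration of what point-of-upload screening might involve, the sketch below shows a minimal decision layer that combines two assumed inputs: a boolean result from a provenance watermark check and a score from an AI-image classifier. The function names, threshold and actions are hypothetical, not a description of any platform's actual system; a production deployment would also involve platform-specific policies, appeal processes and human review.

from dataclasses import dataclass

@dataclass
class UploadDecision:
    action: str   # "publish", "label" or "hold_for_review"
    reason: str

def moderate_upload(watermark_found: bool, classifier_score: float, threshold: float = 0.8) -> UploadDecision:
    """Decide how to handle an image at the point of upload.

    `watermark_found` is assumed to come from a provenance watermark detector,
    and `classifier_score` from a separate AI-image classifier; both are
    illustrative inputs rather than real APIs.
    """
    if watermark_found:
        # Provenance is established: publish, but label the content as AI-generated.
        return UploadDecision("label", "provenance watermark detected; label as AI-generated")
    if classifier_score >= threshold:
        # No watermark, but the classifier is suspicious: queue for human review.
        return UploadDecision("hold_for_review", "classifier suspects AI generation; queue for review")
    return UploadDecision("publish", "no indication of AI generation")

# A watermark hit is auto-labelled; a high classifier score is held for review.
print(moderate_upload(watermark_found=True, classifier_score=0.1))
print(moderate_upload(watermark_found=False, classifier_score=0.93))

The design point is that labelling and removal decisions happen before content is distributed, rather than relying on fact-checkers to catch it after it has spread.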
Given the likelihood that technological advancements will make it increasingly difficult to identify manipulated media, it will be equally important to foster media literacy so that the public is empowered to engage critically with media in all aspects of life. This can be achieved by raising awareness of deepfakes and how to detect them, developing educational content about deepfakes, prioritising media literacy in school curricula and teaching people about the responsible sharing of digital content.
Closing thoughts
With many countries gearing up for general elections in 2024 and predictions that 90 per cent of online content will be AI–generated by 2026, there are growing concerns about how generative AI tools combined with social media could be used to conduct manipulative information campaigns.
Lawmakers in the U.S. and the EU are pushing tech companies to develop robust technical mechanisms to manage the risks posed by AI. Social media giants are also being asked to introduce specific measures targeting AI–generated content in political advertisements hosted on their platforms. To effectively tackle disinformation, however, policymakers need to prioritise general media literacy and consider measures capable of stemming disinformation at the point of dissemination.
As disinformation techniques continue to evolve, it will be important that measures targeted at managing the risks posed by AI–generated content are constantly reviewed and adjusted to reflect technological advancements.
Last updated: 29 February 2024