Generative AI is already having a profound and lasting effect on the advertising industry, and political advertising is by no means exempt. My outlook on its impact on “normal” advertising is positive. The possibilities are exciting. But I am filled with total dread about what’s to come as this technology permeates our politics.
AI can help professionals working in normal advertising develop more personalised, more attention-grabbing (read: more effective) advertising campaigns at a faster rate and at a lower cost.
There are ethical issues to worry about when it comes to AI’s influence on normal advertising. The top three concerns relate to copyright, accuracy and bias, but it’s possible to conceive of a future in which those problems are largely overcome.
The benefits to normal advertising are so significant, so obvious and so quick to realise that most organisations feel justified in implementing the available technology immediately.
As there is a strong self-regulatory system in place for normal advertising, brands feel they’re competing on a level playing field. The industry has confidence that any bad actors will be found out and weeded out.
But the qualities that make AI so positive for normal advertising are also the reasons why the technology’s adoption in political advertising is so damaging to our democracy.
Generative image models (such as Midjourney) and language models (like ChatGPT) pose a significant threat to democratic election processes by enabling the creation and dissemination of personalised, attention-grabbing misinformation in large volumes and at high speed.
With generative AI, it is now possible for anyone with a smartphone to create deepfakes: manipulated videos, images or audio that look and sound real but have been altered to show something that did not actually happen. This technology can be used to create convincing fake news, which can be disseminated widely through social media platforms and other online channels.
Generative AI tools can also produce highly convincing fake text with which to spread false information.
In short: we can expect widespread impersonation of political figures or manipulation of events with the intent of influencing public opinion.
Despite the downsides of AI in election advertising being so much more significant than the issues surrounding normal advertising, there is no comparable regulation. Election campaigns can largely do what they want without fear of regulatory repercussions.
Parties, campaign groups and individuals can (and will, mark my words) attempt to influence the outcome of elections using AI with total impunity.
There is no doubt in my mind that public opinion is already being influenced by this technology at a small scale. It would take me less than five minutes to create an image that makes it look like Keir Starmer has dropped a piece of litter in a park in West London and post it to an Ealing community Facebook Page.
By the time the next general election begins in the U.K., confusion and doubt about the legitimacy of the political process will be unavoidable.
Small, specific groups of voters will be targeted with tailored messages designed to appeal to their fears, biases or preconceptions, using fake imagery, audio and video that is almost impossible to distinguish from the real thing.
I suspect this will primarily be done in community WhatsApp groups and on Facebook Pages: channels that sit outside media scrutiny and are populated by unsuspecting audiences in politically important geographies.
I would hope that this sort of activity wouldn’t be peddled by those in the headquarters of major political parties, though there would be nothing to stop them, and we’ve seen plenty of examples of misleading ads from political marketing professionals who should know better.
But as this technology is readily available and easy to use, it’s a fair assumption that between local party activists, people with too much time on their hands and bad actors, the temptation to peddle misinformation will be too great to resist.
When I first saw a deepfake election advert in June 2018, I was unsettled and could clearly see the negative ramifications.
I wrote that “If the twin forces of AI-enabled advertising and disintermediation between politicians and voters continue at their current pace – within a non-existent regulatory system – the misleading claims which we saw on both sides of the EU referendum campaign will soon look very quaint indeed.”
Well, the twin forces have not continued at their 2018 pace; they have accelerated beyond the imaginations of even many of those working at the cutting edge of technological innovation in Silicon Valley.
And the claim on the side of that bus will indeed look quite quaint when compared with what will emerge in the next year.
In my article in 2018 I also wrote that “If we want to prevent deepfake video (and other future technologies with potential to unjustly distort debate) becoming a feature of our democracy we need a framework for regulating what is and isn’t permissible in election advertising.”
That call for action has only become more urgent.
It is not obvious to me how the government could prevent “grassroots” or “bad actor” deepfake misinformation. But the least that could be done is to prevent it from happening “officially” on an industrial scale, by enacting electoral advertising content regulation that mainstream political parties and advocacy groups would have to abide by.
I’m genuinely scared about what’s ahead. I can’t believe that our politics has allowed us to enter this new phase of communication technology so poorly equipped to defend itself.
We’ve been warning about this for years; it’s time for action. Now. In twelve months it will be too late.
Benedict Pringle
We need your help to continue our campaign to prevent lies in electoral advertising. We are making significant progress; please read how you can support us here.