On Tuesday morning Lindy Cameron, the CEO of the National Cyber Security Centre, spoke on Radio 4 about the ‘more complex threats’ posed by AI.
Prominent among the threats discussed was the one to voters from political disinformation (PD), as the upcoming election will be “the first in the era of generative AI.” Of course we agree, as does OpenAI CEO Sam Altman, who describes it as “one of my areas of greatest concern”.
Take a gander at the NCSC website and you’ll find a great deal of reassuring work and solid advice but not, as far as we could establish, a practical upstream solution to PD. And you can’t, to borrow a Gordon Hinckley phrase from R4 that also stuck in our heads, “plough a field simply by turning it over in your mind.”
The issue in the AI field
Deepfakes, disinformation and misinformation are far from new; we ourselves have been wallowing in this stuff for longer than we care to recall. What is new is that AI will increase the frequency and presentation quality of mis- and disinformation, and as the next General Election is in any case likely to be pretty intense (for which read: the dirtiest in our history), we need to develop solutions right now. How will a regular voter know, for example, if a political party manifesto available online is what it says it is?
The principal AI-related issue is not whether we accept the information itself – we will make our own judgements – but whether we can be certain of its source or, to put it in more technical terms, whether we have a means to confirm the security and provenance of political information.
Information slices
There is such a means; before we set it out, we should briefly and simply segment political information, which in this context we can divide into ‘organic’ and ‘paid’. The first category includes statements and opinions that are released via established carriers, both electronic and otherwise; it would include, for example, the manifesto we referenced earlier, or a newspaper piece by, gulp, Suella Braverman.
Much of this organic form of information is ‘regulated’ by the media, a pretty good example of which is the article to which we nervously alluded. Our friends at Full Fact have also made progress in ‘regulating’ political statements. In other words, a great deal of political utterance is subject to reaction from commentators, the opposition and the originators themselves. We can decide what we think, or simply choose to ignore it. Nevertheless, a significant quantity of organic political material will still reach the voter via third parties, community websites, email programmes, social media such as X/Twitter and so on, and in that context it will not arrive with an opinion from, say, Laura Kuenssberg.
‘Paid’ information, i.e. that delivered via advertising, is similarly ‘unprotected’ and entirely unregulated for factual content. If you’re wondering why that might matter, think big red bus and £350 million. The ‘identity’ legislation, meaning the ID requirements for leaflets and in digital space, is so poorly drafted (deliberately, we suspect) that it does not even require the naming of the political party, just the promoter (largely irrelevant) and the candidate (often unknown). Should you be thinking identification will be obvious from the context, we refer you to CCHQ’s ‘Fact Check UK’ shenanigans at the 2019 General Election.
So it is clear that we need proper AI-disinformation-proof identification procedures in both channels. Let’s start with paid, as it is advertising’s particular circumstances that we know best and in which we can make real progress (we’re advertising people, a confession which has probably just skewered our case).
We benefit (in this context) from the concentration of power in a few digital platforms: most advertising is bought or sold, or both, through a relatively narrow ‘funnel’. On arrival at, for example, Facebook, an ad is allocated an ID number, which identifies and confirms the source via secure channels between advertiser and publisher. A statement in sensibly sized text in the ad will read:
This ad is verified by (Facebook) to be from (the political party or third party).
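To make the mechanics concrete, here is a minimal sketch of how such an ID might work, assuming the platform holds a private key and derives the ID from the verified advertiser and the ad content. The function names and the keyed-hash approach are our illustration, not any platform’s actual system:

```python
import hmac
import hashlib
import secrets

# Hypothetical sketch: a platform binds an ad ID to a verified advertiser
# and a specific creative. All names here are ours, for illustration only.

PLATFORM_KEY = secrets.token_bytes(32)  # held privately by the platform

def issue_ad_id(advertiser: str, ad_content: str) -> str:
    """Issue an ad ID tying the verified advertiser to this creative."""
    digest = hmac.new(
        PLATFORM_KEY,
        f"{advertiser}|{ad_content}".encode(),
        hashlib.sha256,
    ).hexdigest()
    return digest[:16]  # short, printable ID for display in the ad

def verify_ad_id(advertiser: str, ad_content: str, ad_id: str) -> bool:
    """Re-derive the ID; a match confirms platform-verified provenance."""
    return hmac.compare_digest(issue_ad_id(advertiser, ad_content), ad_id)

ad = "Vote for sensible weather on Thursday."
ad_id = issue_ad_id("Example Party", ad)
print(f"This ad is verified by (Platform) to be from (Example Party). ID: {ad_id}")
print(verify_ad_id("Example Party", ad, ad_id))   # True
print(verify_ad_id("Someone Else", ad, ad_id))    # False: wrong source
```

The point of the design is that only the platform can derive a valid ID, so a fabricated ad that simply copies the verification wording would fail the check.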
‘Organic’ material is obviously a more difficult area in which to establish security, and it may require cross-party and publisher discussion. Nevertheless, each piece of publicity material might carry the message:
If you are uncertain of the source of this material you can check on politicalpartyverification.co.uk
Obviously, such a process will require the development of a database containing all of the most significant publicity material (organic and paid) and a decent search function. This is a recommendation, incidentally, that we made several years ago for paid advertising material, and one that also forms part of the EU’s plans in its Political Advertising Regulation.
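A minimal sketch of how that database and search might behave, assuming material is registered under a hash of its content so a voter-facing checker can match a document in hand against what was actually published (the schema, names and normalisation are ours, purely for illustration):

```python
import hashlib

# Hypothetical sketch of the verification database: publicity material is
# registered by a content hash, so a checker tool can confirm whether a
# given document matches anything a party actually released.

registry: dict[str, dict] = {}

def fingerprint(text: str) -> str:
    """Normalise whitespace, then hash, so trivial re-wrapping still matches."""
    normalised = " ".join(text.split())
    return hashlib.sha256(normalised.encode()).hexdigest()

def register(source: str, title: str, text: str) -> str:
    """Called by the party or publisher when material is released."""
    key = fingerprint(text)
    registry[key] = {"source": source, "title": title}
    return key

def check(text: str) -> dict | None:
    """Called by a voter: returns the registered source, or None if unknown."""
    return registry.get(fingerprint(text))

register("Example Party", "2024 Manifesto", "We promise longer weekends.")
print(check("We promise   longer weekends."))  # matches despite spacing
print(check("We promise shorter weekends."))   # None: altered or fake
```

A real deployment would need fuzzier matching and a proper search index, but the principle is the same: an unregistered or altered document simply fails to resolve to a source.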
And a long-overdue ad code
Advertising content, as distinct from its source, should also be accurate (a proposition with which nine out of ten voters agree, according to our 2019 YouGov research). A political advertising code was unanimously recommended by a cross-party House of Lords committee in June 2020. The requirement was only for factual claims to be accurate; political opinions and policy statements were not part of the proposals. Nevertheless, the government’s response was less than enthusiastic, largely on the grounds of ‘free speech’. We’re hoping that the arrival of the aforementioned complex threats posed by AI might prompt a reconsideration, which could bring about in advertising a statement such as:
This ad is verified by (Facebook) to be from (the political party or third party) and is subject to a code that requires factual accuracy.
So what should happen next?
More listening, then some field ploughing.
Listening, please, by Lindy Cameron, by the Cabinet Office, by DCMS, by CSPL, by those of influence in all political parties, and crucially by the major digital platforms.
When some of the above have listened, we and many others can help with the ploughing of the field.
If you like our proposal, please feel free to share it on X/Twitter via this post.