“The world will know whether democracy lives or dies by the end of 2024,”
- Nobel Peace Prize laureate Maria Ressa (see Politico interview)
In this blog I look at key concerns about data and AI midway through the big global election year of 2024 - how the year has panned out so far and what the research is telling us. It’s fascinating and incredibly complex to work out how we should regulate in future and what we should really be worried about.
(This blog is written in a personal capacity and not in connection with any of the organisations I work with).
A quick look back
Firstly, I take a quick look back to 2018 and what we were concerned about back then.
A seminal point in my career was being involved at the heart of the ICO investigations into the use of personal data in political campaigning in 2018 - a key moment when the potential risks of data and AI during elections came into sharp focus and a real debate opened up about what the harms were and how to address them.
The ICO investigation spanned 30 organisations, including Cambridge Analytica, and covered a wide range of actors: Brexit campaign groups, political parties, data brokers, social media platforms and other technology companies involved in the digital campaigning process.
Our policy report ‘Democracy Disrupted’ sought to provide a comprehensive overview of the political campaigning data ecosystem, drawing on the most comprehensive investigation a data protection authority had ever conducted into the uses of personal data in the political campaigning process. It pulled back the curtain on how the ecosystem operated, set out the risks of unfair and opaque microtargeting, and made policy recommendations on how to address them. The report also described the scale of data collection in the UK at the time and the risks posed by granular targeting, including the use of sensitive personal data by social media platforms.
Our report included a recommendation for a statutory code of practice for personal data use in political campaigning. The Government at the time rejected this, but the ICO produced comprehensive guidance in any case, which now sets out the key steps the regulator expects all relevant controllers to follow. We also made recommendations about transparency, including future development of ad libraries and statutory requirements for political ad labelling.
As part of our 2018 policy approach we also commissioned a report from Jamie Bartlett and his team at Demos on the Future of Political Campaigning. We asked them to assess the state of the art at the time and where it could go over the next five years. The report drew out key trends:
Detailed audience segmentation
Cross-device targeting
Growth in use of ‘psychographic’ or similar techniques
Use of AI to target, measure and improve campaigns
Use of artificial intelligence to automatically generate content
Using personal data to predict election results
Delivery via new platforms
Looking globally, in 2019 the ICO also commissioned Professor Colin Bennett from the University of Victoria to produce a report entitled “Privacy, Voter Surveillance and Democratic Engagement: Challenges for Data Protection Authorities”, to be discussed at the Global Privacy Assembly.
The report highlights the challenges faced in some jurisdictions, such as the US and Canada, where data protection and privacy laws don’t apply to political parties at federal level. The significance of privacy protection for democratic rights is also clearly set out. The report discussed the balance between the right to privacy and the rights of political actors to communicate with the electorate, and how that balance is struck differently in different jurisdictions depending on a complex interplay of legal, political and cultural factors. It identified five general patterns of data-driven elections: Permissive, Exempted, Regulated, Prohibited and Emerging.
While privacy was the key concern in 2018, concerns have now broadened to include the rise of misinformation and AI deepfakes, and how they could affect voters.
Setting the scene in 2024
This year will see votes in at least 64 countries (plus the European Union), together representing around 49% of the world’s population - roughly four billion people. King’s College London lists all the likely elections this year.
For the first time, AI systems such as generative AI are available as mainstream tools for use during elections. As Politico highlights, some frightening powers of creation are at people’s fingertips:
From his Boston apartment, Callum Hood has the power to undermine any election with a few keystrokes. Hood, a British researcher, fired up some of the latest artificial intelligence tools made by OpenAI and Midjourney, another AI startup. Within seconds of him typing in a few prompts — “create a realistic photo of voter ballots in a dumpster”; “a photo of long lines of voters waiting outside a polling station in the rain”; “a photo of Joe Biden sick in the hospital” — the AI models spat out reams of realistic images.
The potential for harm at the start of 2024 was clear.
A report, “New Political Ad Machine: Policy Frameworks for Political Ads in an Age of AI”, by Brennen and Perault at the University of North Carolina at Chapel Hill highlights four key risks from the use of generative AI (GAI) in political campaigning:
1. Scale: GAI may facilitate an increase in the volume of deceptive content in political ads by lowering the cost and difficulty of producing manipulated content.
2. Authenticity: GAI may produce falsehoods that look more realistic or that appear to come from authentic sources.
3. Personalization: GAI may allow advertisers to better personalize targeted content to smaller audience segments, increasing the effectiveness of deceptive ads.
4. Bias: GAI may exacerbate bias and discrimination in political ads.
The respected international think tank Chatham House also warned about AI-generated fake videos, ‘rumour bombs’ and ‘disinfo’ threatening key votes in America, India and Taiwan. Into this technological mix we can add the risks from states such as Russia, China and Iran seeking to disrupt democracy.
The Politico article referenced above provides numerous examples of how generative AI is entering use in a range of countries, and the bad intent behind many of the uses, but recognises that no-one really knows what impact this is having, also noting that many high-profile deepfakes have been quickly debunked.
At the Oxford Internet Institute, Prathm Juneja and Keegan McBride set out how data and AI are changing US elections. They argue that the “current focus on hypothetical risks created by AI distracts from what is perhaps an even more important story: how AI is changing and transforming political parties themselves.” They see that data on voters is becoming even more important to political parties and that campaigns will be hyper-personal, driven by advances in AI that allow campaign staff to automatically tailor messages to specific groups.
Complexity and the need for evidence
Before moving into a deeper discussion of the latest evidence, and some detail about what has happened so far in 2024, it is important to recognise the full range of actors seeking to use AI and data-driven technologies in elections - from legitimate registered parties and campaigners to extreme political groups, lone actors seeking to cause disruption or gain publicity, and state actors seeking to harm democracy and pursue particular political outcomes. All this adds to the complexity.
There is a risk that we could oscillate between alarm and complacency, or simply miss where the real risks and harms lie. Academic evidence will be vital in understanding the wider picture: comparing, contrasting, deciphering the important trends and ensuring policy makers have the full range of evidence to consider when discussing whether new regulation is needed.
What use is being made of microtargeting and generative AI in 2024?
Zelley Martin et al., from the University of Texas at Austin, authored a report, “Political Machines: Understanding the Role of AI in the U.S. 2024 Elections and Beyond”. They interviewed campaign consultants from both major political parties, vendors of political generative AI tools, a political candidate using generative AI for her campaign, a digital strategist for a Democratic advocacy organization, and other relevant experts in the digital politics space. Some of their key findings were:
Generative AI has already accelerated the scale and speed of data analysis, data collection and content generation, increasing the speed at which political consultants can A/B test content and unearth the approaches and language most compelling to target audiences.
While some interviewees expressed that GenAI-facilitated hypertargeting may be overblown, others emphasized that with coming advances in the technology, as well as current use-cases in data collection and analysis, consultants will be able to create hyper-personalized content that will “move the needle.”
Interviewees expressed ardent hopes that the use of GenAI in politics would democratize the campaign space, allowing for increased engagement with marginalized and young voters, and would empower smaller campaigns and nonprofits.
Sam Jeffers, founder of Who Targets Me, a small group of activists creating and managing a crowdsourced global database of political adverts placed on social media, was interviewed in March 2024 and discussed his understanding of how generative AI was being used in elections: “From conversations we’ve had with political campaigners, they are exploring using Large Language Models (LLMs) to solve ‘blank page’ type problems, where they just need something to get the strategic and creative juices flowing, and to look at how to lightly customise messages by rewriting them for specific audiences. But in terms of a broad integration into campaigning practice? We don’t see that happening yet.”
Jeffers is also sceptical about whether hyper-personalised ads are the new reality, citing the limited resources of digital campaigns in Europe, the limits placed on personalisation by social media platforms post-Cambridge Analytica, and questions about whether they are effective compared with general messages like “Make America Great Again”.
In an interview with the Guardian, former Conservative Party digital strategist Tom Edmonds suggested that UK political ads were moving away from microtargeting. He said that the UK General Election would be defined by parties spending tens of millions on online adverts designed to reach as many people as possible. “It’s got to the stage of being like TV advertising – it’s top-level messaging.” He also noted that Facebook now prevented political campaigns from using many of the targeted methods deployed in past elections.
What impact do microtargeting and generative AI have?
Oxford Internet Institute academics Kobi Hackenburg and Professor Helen Margetts published research in 2024 that used a custom web application in a randomized control experiment. They integrated demographic and political data into GPT-4 prompts in real time and generated thousands of unique messages tailored to persuade individual users. Their key finding was this:
“Deploying this application in a pre-registered randomized control experiment, they found that messages generated by GPT-4 were broadly persuasive. Crucially, however, in aggregate, the persuasive impact of microtargeted messages was not statistically different from that of non-microtargeted messages. This suggests — contrary to widespread speculation — that the influence of current LLMs may reside not in their ability to tailor messages to individuals, but rather in the persuasiveness of their generic, non-targeted message.”
They also note that GPT-4 and similar next-generation AI models may not be able to tailor messages well enough because they misunderstand what people are really like. The messages targeted at hypothetical voters were also found to be overly generic, oversimplifying the common view of any group. The research also highlights the potential for future advances and what future research should cover.
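To make the experimental comparison concrete, here is a minimal sketch of the kind of analysis that sits behind such a finding. The data is simulated and every number is a hypothetical stand-in - this is not the authors’ code or data - but it shows how the two headline comparisons (message vs no message, targeted vs generic) can be made.

```python
# A minimal sketch of the comparison behind the Hackenburg & Margetts finding:
# does microtargeting add persuasive impact over a generic message?
# All data here is simulated and the effect sizes are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1000  # participants per arm

# Simulated post-treatment policy-agreement scores (0-100 scale).
control = rng.normal(50, 15, n)          # no message
generic = rng.normal(54, 15, n)          # non-targeted GPT-4 message
microtargeted = rng.normal(54.5, 15, n)  # message tailored to user attributes

# Message arm vs control: is the message persuasive at all?
t_gen, p_gen = stats.ttest_ind(generic, control)
# Microtargeted vs generic: does tailoring add anything on top?
t_micro, p_micro = stats.ttest_ind(microtargeted, generic)

print(f"generic vs control:       t={t_gen:.2f}, p={p_gen:.4f}")
print(f"microtargeted vs generic: t={t_micro:.2f}, p={p_micro:.4f}")
# With these hypothetical parameters the first test is typically significant
# and the second is not - mirroring the paper's headline pattern.
```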
Another academic article, by Simchon et al., “The persuasive effects of political microtargeting in the age of generative artificial intelligence”, presented the results of four studies examining the effectiveness of combining generative AI with personality inference from the text a person consumes. Their results demonstrate that personalized political ads tailored to individuals’ personalities are more effective than non-personalized ads. They also highlighted the feasibility of automatically generating and validating these personalized ads on a large scale.
While noting that other studies have failed to find a benefit compared with generic adverts, Simchon et al. show that a number of studies have found that microtargeting can increase voter turnout during tight political competitions based on highly salient issues, can prevent voter defection from a party they initially favored, and can have an effect even when ads are targeted on the basis of a single personal attribute.
Considering the impacts of generative AI on misinformation, Simon et al. argue that fears about the impact of generative AI on misinformation are overblown. Taking three risks - quantity, quality and personalization - they argue that these remain speculative for now, and that existing research suggests at best modest effects of generative AI on the misinformation landscape.
Some impacts are therefore emerging, but the picture is mixed and we are not yet seeing clear and unequivocal evidence of harms emerging in the research.
Algorithmic risks, recommender systems and misinformation
The risks don’t just come from microtargeted ads and uses of generative AI. As more and more people use social media as their primary news source, the role of recommender systems as sources of information and news related to elections becomes ever more important. Policy makers have been concerned about how recommender systems can create filter bubbles and channel people towards more extreme views. Nearly three-quarters of the videos watched on YouTube are delivered via its recommendation algorithm.
Writing in Internet Policy Review, Whittaker et al. published research in 2021 which found that YouTube does amplify extreme and fringe content, while Reddit and Gab did not. However, research by Ibrahim et al., published in PNAS Nexus, constructed archetypal users across six personas in the US political context, ranging from Far Left to Far Right. Their controlled experiment consumed over eight months’ worth of videos and was recommended over 120,000 unique videos. It found that while the algorithm pulls users away from political extremes, this pull is asymmetric, with users being pulled away from Far Right content more strongly than from Far Left content.
Writing for Tech Policy Press, two former Meta scientists, Matt Motyl and Jeff Allen, have highlighted the risks posed by recommender systems on social media. They argue that with engagement-based ranking systems people are disproportionately likely to engage with more harmful content that is divisive and contains misinformation. They note that for the 2020 elections Facebook implemented a series of “break glass” measures to improve the integrity of its platforms: users were shown credible news sources, not engagement-driven disinformation, and election lies and threats were disallowed and aggressively policed.
Independent US researchers were given unprecedented access to Meta’s data during the 2020 elections. Meta claimed the studies’ results show its platforms don’t cause political polarization, but many academics argue that there is a much more complex picture. Nick Clegg from Meta stated that “the experimental findings add to a growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization” or have “meaningful effects on” political views and behavior. Even though the data access was unprecedented, most academics are not comfortable with such a sweeping statement, pointing to the need for more longitudinal data that is more fully representative of users’ experience on the platform. This Wired article has a good breakdown of the far more nuanced findings, noting that one study found that resharing elevates “content from untrustworthy sources.” Another found that while users “whose feeds excluded reshared content did end up consuming less partisan news, they also ended up less well informed in general”.
Michael Wagner, a professor of journalism and communication at the University of Wisconsin-Madison, who helped oversee Meta’s 2020 election project, said: “I don’t think the findings suggest that Facebook isn’t contributing to polarization. I think that the findings demonstrate that in 2020, Facebook wasn’t the only or dominant cause of polarization, but people were polarized long before they logged on to Facebook in 2020.”
More than a dozen civil society groups in Europe have called for stronger Digital Services Act guidance from the European Commission to address the risks that recommender systems pose for democracy. They have called for platforms to “disable profiling-based recommenders by default - prioritising privacy, preventing exploitation and a move away from engagement-based ranking - away from a system designed to favour and incentivise extreme and emotive content.”
However, a study from researchers at the University of Texas at Austin, published in Nature, found that replacing the algorithm with a simple chronological listing of posts from friends - an option Facebook recently made available to users - had no measurable impact on polarization.
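The contrast between engagement-based and chronological ranking is easy to see in miniature. The sketch below is an illustrative invention - the signals and weights are not any platform’s actual formula - but it shows why feeds that optimise for engagement tend to surface provocative content that a purely chronological feed would not.

```python
# A minimal sketch contrasting engagement-based ranking with a chronological
# feed. The signals and weights are illustrative inventions, not any
# platform's actual formula.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    age_hours: float
    likes: int
    comments: int
    reshares: int

def engagement_score(p: Post) -> float:
    # A weighted sum of engagement signals, decayed by age. Comments and
    # reshares are weighted heavily - one reason divisive content that
    # provokes reactions tends to rise in such feeds.
    return (1 * p.likes + 5 * p.comments + 10 * p.reshares) / (1 + p.age_hours)

posts = [
    Post("local_news", "Council meeting summary", age_hours=1, likes=12, comments=2, reshares=1),
    Post("outrage_page", "You won't BELIEVE what they did!", age_hours=6, likes=300, comments=450, reshares=800),
    Post("friend", "Holiday photos", age_hours=2, likes=40, comments=8, reshares=0),
]

engagement_feed = sorted(posts, key=engagement_score, reverse=True)
chronological_feed = sorted(posts, key=lambda p: p.age_hours)  # newest first

print([p.author for p in engagement_feed])     # outrage content floats to the top
print([p.author for p in chronological_feed])  # recency alone decides the order
```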
Motyl and Allen both work for the Integrity Institute and have published a report with a series of proposals on what a responsible recommender system looks like during elections.
All in all, recommender systems are clearly a concern when considering the impact of AI on the political process, given that digitally native generations will get a greater share of their election news from social media. More research is still needed to unpick the impacts and then consider how regulation can best target the risks. The EU Digital Services Act will be a large and important area of research, to see what impact regulation has.
As noted in the Mozilla report discussed below, there is also evidence that election safeguards may be disproportionately focused on the US and EU.
Tech firms - taking action?
Major tech companies have signed up to an Elections Accord pledging to combat deceptive use of AI in 2024 elections. They have committed to seven goals, from prevention to resilience, and to developing and implementing technology to mitigate risks related to deceptive AI election content, including detecting its distribution.
In March 2024 the European Commission published guidelines under the Digital Services Act (DSA) for the mitigation of systemic risks online for elections. These were addressed to the Very Large Online Platforms and Search Engines designated under the DSA as the law came into force.
Some platforms have also sought to prevent generative AI from engaging with election content; for example, Google has restricted its AI chatbot Gemini from answering questions on the 2024 elections. They have published blogs covering the safeguards in place for the UK, Indian and EU elections.
Nick Clegg set out Meta’s approach to elections in a November 2023 blog - they will block new political ads during the final week of the 2024 US election campaign and will require advertisers globally to disclose when they use AI or digital methods to create or alter a political or social issue ad in certain cases. Meta also operates a publicly available ad library (running for seven years). Meta has also warned that Russian and Chinese interference networks are ‘building audiences’ ahead of 2024.
There is also significant concern that Meta is deprecating its CrowdTangle research data access tool, promoting its Content Library and Content Library API instead. The tool has been of considerable value to researchers studying the impact of social media on elections and democracy (see Tech Policy Press for the implications and where the EU DSA may enable new access). The deprecation also now forms part of formal proceedings brought by the European Commission against Meta under the DSA.
OpenAI have announced a number of measures to address risks related to the use of their tools during elections:
Directing ChatGPT users to the European Parliament’s official source of voting information when asked certain questions about the election process, such as where to vote
Providing researchers with early access to a new tool that can help identify images created by DALL·E 3
Not allowing people to build ChatGPT applications for political campaigning and lobbying (though how this is policed is a further question)
Generative AI image tool Midjourney is blocking user requests to create AI-generated images of President Joe Biden and former President Donald Trump in the run-up to the 2024 US election.
The Mozilla Foundation argues that there is an increasing body of evidence showing that the tech platforms’ neglect of Global Majority countries has led to a disproportionate amount of harm in these regions, noting that social media companies have extremely limited operational and contextual resources in these countries, and yet very large user bases.
What can be learned from the Indian elections?
Writing in The Conversation, Vandinika Shukla and Bruce Schneier from the Harvard Kennedy School highlight that the “Indian election was awash in deepfakes – but AI was a net positive for democracy.” They argue that campaigners used AI for typical political activities, including mudslinging, but primarily to better connect with voters. Political parties in India are estimated to have spent US$50 million on authorized AI-generated content for targeted communication.
They also highlight the multilingual benefits in a country with 22 official languages and how AI improves accessibility. The article also recognises the adversarial uses, including the use of generative AI tools to convey incitement to violence. They argue that India’s early adoption of generative AI offers lessons for other countries, and that India’s return to a “deeply competitive political system especially highlights the possibility for AI to have a positive role in deliberative democracy and representative governance.”
Taiwan’s pre-emptive response
Writing in the Journal of Democracy, John Glenn highlights how Taiwan sought to address the risks of digital manipulation ahead of its 2024 elections. The risks included a video alleging ballot rigging that circulated on election day and went viral.
Taiwan mounted a “whole of society” response to the risks. Taiwan’s Digital Minister Audrey Tang observed, “What’s crucial is pre-empting information manipulation before it reaches people. It requires understanding the overarching narratives the attackers use — for example, that democracy never delivers, or that democratic processes are corrupt. We then tell people these narratives are going to surface. Ideally people will get the real information before they receive the misinformation.”
The response brought together civil society groups, independent fact-checking groups and lawyers to identify and respond to information manipulation and attacks on opposition leaders and journalists.
Can regulation address the risks?
The EU probably has the most comprehensive regime for tackling the risks of misinformation during elections, under the Digital Services Act. In addition, the new Regulation on the transparency and targeting of political advertising (TTPA) has been passed and should take effect in autumn 2025. The DSA requires the platforms to assess risks and put mitigations in place, while the TTPA creates transparency labelling requirements, strict rules on consent for targeted political ads, a prohibition on the use of special category data for targeting, and a ban on the provision of advertising services to third-country sponsors in the three months before an election or referendum.
Drawing on my recent work on the effectiveness of regulation in protecting children online, it will now be important for researchers to assess how effective the DSA has been during the recent European Parliament elections and other key national elections (such as in France).
In the UK there has been considerable criticism of the 2023 Online Safety Act’s failure to address the risks of misinformation, including during elections. The only references to misinformation in the Act are about setting up a committee to advise Ofcom and changes to Ofcom’s media literacy policy. The UK Parliament Joint Committee on the Draft Online Safety Bill recommended that “Disinformation and misinformation surrounding elections are a risk to democracy. Disinformation which aims to disrupt elections must be addressed by legislation. If the government decides that the Online Safety Bill is not the appropriate place to do so, then it should use the Elections Bill which is currently making its way through Parliament.” The government’s response highlighted other measures it was taking, and the balance with freedom of expression, and did not take this recommendation forward.
The Online Safety Act also fails to give researchers access provisions equivalent to those in the DSA - something the new UK Government should address urgently.
In the UK, the Labour Together think tank has proposed a cross-party pledge not to use deepfakes that constitute electoral misinformation. They have also called for an exception to the ban on media coverage of a general election on polling day, to allow mainstream media to rebut fraudulent misinformation that could be going viral as people head to vote. These are policy ideas that could inform the UK Labour Party’s approach should it win power.
Tim Gordon, chief executive of the UK Liberal Democrats from 2012 to 2017, writing for Tortoise in a personal capacity, has also called for a clear code of conduct that all political actors - parties and campaigners - sign up to, and for key stakeholders to track what happens.
In the US, the Senate Rules Committee approved three bipartisan bills aimed at addressing AI’s impact on elections: the Protect Elections from Deceptive AI Act, which would ban the use of AI to create materially deceptive content falsely depicting federal candidates in political ads; the AI Transparency in Elections Act, requiring disclaimers on political ads that use AI-generated images, audio or video; and the Preparing Election Administrators for AI Act, which directs the Election Assistance Commission to issue guidelines to help election administrators address AI’s impact on election administration, cybersecurity and disinformation. It is unclear, though, whether these bills will make it to the statute book.
Regulation in the US is otherwise limited, but one example is the Federal Communications Commission’s fine for a robocall campaign that used an AI-generated clone of Biden’s voice.
Conclusion
We have clearly come some way since Cambridge Analytica - it is harder to undertake granular political targeting on some platforms and in some jurisdictions. We also have better tools such as ad libraries and ad labelling.
Heading into the second half of 2024, and the UK poll next week, a mixed picture is emerging. New uses of AI are appearing but the impacts are unclear. There is evidence that microtargeting and uses of generative AI may not yet bring big gains for political parties. Politically motivated AI content can influence people, but not as much as many expected, and not necessarily more than non-personalised content.
As well as the risks of generative AI and the microtargeting of ads, it will be vital to continue to assess the impact of social media recommender systems in promoting content during elections.
Regulation differs across the world, depending on the appetite for state regulation. It will now be important to see the evidence of the effectiveness of the EU DSA and what can be learnt from the experience.
Technology companies are putting measures in place, more than ever before, and we again need to assess their effectiveness.
It remains to be seen how much benefit will be gained from labelling and watermarking of AI-generated content, but assessment of these mechanisms will also be vital in learning which safeguards will be effective in protecting elections in future.
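As an illustration of what such assessment involves in practice, the sketch below writes and then checks a simple provenance label in image metadata. The key name “ai_generated” is a hypothetical example of mine, not part of any standard - real schemes such as C2PA embed signed manifests that need dedicated tooling - and the point it makes is that absence of a label proves nothing, since plain metadata labels are trivially stripped.

```python
# A minimal sketch of writing and checking a provenance-style label in image
# metadata. The key "ai_generated" is a hypothetical example, not a standard:
# real schemes such as C2PA embed signed manifests needing dedicated tooling.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVENANCE_KEYS = {"ai_generated", "c2pa_manifest", "digitalsourcetype"}

def label_image(src: Image.Image, path: str) -> None:
    """Save an image with an illustrative AI-disclosure text chunk."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # hypothetical label
    src.save(path, pnginfo=meta)

def check_provenance(path: str) -> list[str]:
    """Return any provenance-style metadata keys found in the image."""
    info = Image.open(path).info
    return [k for k in info if k.lower() in PROVENANCE_KEYS]

if __name__ == "__main__":
    label_image(Image.new("RGB", (8, 8)), "labelled.png")
    print(check_provenance("labelled.png"))  # ['ai_generated']

    # Stripping the label is trivial: re-saving drops the text chunk,
    # which is why robust watermarking remains an open research problem.
    Image.open("labelled.png").save("stripped.png")
    print(check_provenance("stripped.png"))  # [] - the disclosure is gone
```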
While each country will need to implement public awareness measures in the context of their own constitutional and cultural situation, there is hopefully something that can be learned from the experience in Taiwan and the value of wide scale civic engagement with misinformation.
It would clearly be wrong to be complacent - we’re still at such an early stage with generative AI and other uses of AI. Continuing to press for data access and transparency from technology companies continues to be vitally important in tracking the impacts. Governments and policy makers must also commit to engaging with the evidence and a wide range of stakeholders when discussing the solutions.
Lastly, it will be vitally important that we learn and assess impacts in all parts of the world and ensure the Global South is not neglected when companies roll out safeguards and mitigations.
How are you being targeted?
If you want to understand what is happening in your country, the Who Targets Me tool is a great way to explore political ad spending by country, and you can learn who is targeting you by using their browser extension.