In 2024, a record four billion people were eligible to vote around the world. At the same time, rapid advances in artificial intelligence (AI) posed serious risks to the security and information integrity of elections.
Would deepfakes of candidates sow chaos? Would AI-generated robocalls trick people into avoiding their polling places? Might AI-generated spam lead people to believe their vote would not be counted? Would this in fact be the first AI election? Simply hoping for the best seemed like the worst available option.
So, we at Aspen Digital set up the AI Elections Initiative to build awareness and connection among elections officials, tech companies, civil society groups, and media. Our aim was to foster resilience, address challenges, and explore solutions. We brought folks together all year and cross-educated groups to be ready for whatever might come at them.
How did things shake out? The worst-case scenarios did not come to pass – at least, not at scale. Some call it “the dog that didn’t bark,” which isn’t entirely true.
While AI did not deliver an outsized disruption, this past year laid the foundations for the future. In this recap, our team reflects on AI’s role in 2024 elections around the globe and looks ahead to the next chapter of its influence on democracy.
2024 elections: Hope for the best, but prepare for the worst
2024 marked a historic moment, not just because of the U.S. Presidential election, but for the record number of national elections in countries worldwide, including some of the world’s most populous. Taiwan, Bangladesh, India, Indonesia, South Africa, Mexico, the European Union, France, and the United Kingdom all went to the polls in 2024. With AI exploding onto the scene following the launch of ChatGPT in late 2022, and alongside rapid advancements in tools for AI-generated text, images, and video, there was widespread concern that bad actors would leverage these technologies to create fake content and sow distrust.
To prepare for these first AI elections, Aspen Digital held its first convening of the year in January on the sidelines of the Knight Foundation’s Informed conference. This gathering brought together researchers and civil society groups to examine the difference between AI’s real-world applications in elections and the more far-fetched possibilities that dominated public conversation. From this discussion, we outlined three key areas of concern:
- Hyperlocal Voter Suppression: AI could be used to spread false information about when, where, and how to vote, targeting specific communities to reduce turnout.
- Language-Based Influence Operations: AI significantly lowers the barrier for bad actors to produce malicious content in multiple languages. By automating translation and accurately deploying idioms and slang, such operations become more sophisticated and harder to detect.
- Deepfaked Public Figures: With AI tools becoming increasingly powerful, it is now easier than ever to create fake content depicting public figures saying or doing things they never did, eroding trust in media and political discourse.
From here, we set about preparing election officials and tech companies to address any or all of these risks.
The dog that didn’t bark?
While some AI tools were deployed with the intent to mislead, their impact in 2024 was uneven and far subtler than anticipated. The worst-case scenarios did not come to pass, though calling it “the dog that didn’t bark” isn’t entirely accurate, as described below.
At the start of the year, concerns about AI’s potential to disrupt elections seemed justified. For example:
- In the United States, ahead of New Hampshire’s primaries, a robocall using AI-generated audio mimicked President Biden’s voice, urging voters to stay home.
- In Slovakia’s 2023 election, a fake AI-generated recording of a candidate saying he rigged the election went viral shortly before voting; he went on to lose the election.
- In Bangladesh, a conservative majority-Muslim country, Rumeen Farhana, an opposition-party politician, faced sexual harassment online when an AI deepfake photo of her in a bikini emerged on social media.
However, these instances—while concerning—remained isolated rather than widespread.
With the encouragement of Aspen and others, tech companies also took notable steps to mitigate AI misuse. At the Munich Security Conference in February, twenty leading tech companies signed an accord committing to managing the risks of deceptive AI election content on their platforms.
Under the hood: Where AI showed up in this year’s information campaigns
While high-profile deepfake videos largely failed to materialize, a more insidious use of AI was deployed to significant effect. Authorities in several countries warned of how a Russia-backed social media campaign was using AI-powered software called Meliorator to create fake personas online that then spread disinformation. Some of these accounts had over 100,000 followers and were generated to look like U.S. citizens.
At the same time, researchers analyzing these first AI elections uncovered critical insights that show some beneficial uses of AI and ongoing challenges with disinformation spreading without AI’s aid. Those findings included:
- Half of AI deepfakes are not deceptive: Many uses of AI, such as accessibility tools, translation, or parody, were benign and beneficial.
- Deceptive content can thrive without AI: While AI makes content creation cheaper, bad actors can replicate similar outcomes without AI tools.
- Demand, not supply, drives misinformation: Addressing the desire for misinformation is far more impactful than focusing solely on AI’s role in its production.
Looking ahead: The road to 2025 and beyond
Collaboration and early intervention this year played a critical role in raising awareness and mitigating some of AI’s potential harms. However, as the technology improves and bad actors become more advanced, AI’s influence on elections is only expected to grow.
AI-generated content will become more sophisticated, harder to detect, and cheaper to produce. Campaigns, political actors, and bad actors will likely scale up experimentation with AI tools. At the same time, ethical frameworks, regulations, and technological safeguards remain underdeveloped and must be prioritized to protect future elections.
With major elections on the horizon in Canada, Germany, and Australia in 2025—and the U.S. midterms in 2026—the foundations laid in 2024 provide a springboard for continued collaboration.
Aspen remains committed to raising awareness, fostering multi-sector solutions, and shaping the future of democracy at the intersection of technology and elections. Together, we can build resilience against emerging challenges and ensure AI strengthens, rather than undermines, trust in the electoral process.
Appendix
To address the risks posed by AI in elections, Aspen Digital launched its AI Elections Advisory Council, which brings together leaders from technology, civil society, election administration, national security, and academia. Led by co-chairs Klon Kitchen, Alondra Nelson, and Kim Wyman, the council set out to identify emerging challenges and develop actionable strategies to mitigate AI-related risks. Below is a list of the council’s contributions.
In addition to its kickoff event at the Knight Informed Conference, Aspen hosted a series of impactful gatherings, including:
- National Association of Secretaries of State Winter Conference (Washington, DC)
  - Presented to over 250 election officials on AI’s impact on election administration. The session, which included CISA, provided practical guidance on mitigating AI-related risks.
- AI’s Impact on Global Elections Forum (Columbia University)
  - Co-hosted with the Institute of Global Politics, this public event explored growing concerns over diminished social trust and how bad actors are leveraging AI to disrupt online information ecosystems.
- Technology and Civil Society Briefing
  - Convened top voices in elections, technology, and investment for an in-person meeting focused on actionable risks and opportunities for AI risk mitigation.
- Roundtable on Marginalized Communities and Voting Access
  - Hosted a discussion with over 30 leaders from tech, government, civil rights, and election administration about technology’s impact on historically marginalized communities and their access to the ballot box.
- Expert Media Briefings for Journalists
  - Organized a day-long series of briefings for editors, reporters, and columnists, helping them navigate the challenges of election coverage in the age of AI. This was the third installment of Aspen Digital’s biannual election-focused media events.
- Webinar on the 2024 Election and Certification Process
  - Conducted a one-hour webinar outlining five critical moments in the 2024 election and certification process, helping stakeholders prepare for key milestones.
These convenings resulted in the development of checklists for mitigating risks across Aspen Digital’s three focus areas—hyperlocal voter suppression, language-based influence operations, and deepfaked public figures—providing actionable recommendations for election officials, campaigns, and platforms.
Generative AI was used to help edit this post and brainstorm the headline.