How might AI remake societies in the next 50 years? What can we do now to shape those shared futures?
Let’s prepare for the second and third order effects of AI. Read our latest recommendations.
In 2024, a record four billion people were eligible to vote around the world. At the same time, rapid advances in artificial intelligence (AI) represented serious risks to the security and information integrity of elections.
Would deepfakes of candidates sow chaos? Would AI-generated robocalls trick people into avoiding their polling places? Might AI-generated spam convince people their votes would not be counted? Would this in fact be the first AI election? Simply hoping for the best seemed like the worst option.
So, we at Aspen Digital set up the AI Elections Initiative to build awareness and connection among elections officials, tech companies, civil society groups, and media. Our aim was to foster resilience, address challenges, and explore solutions. We brought these groups together throughout the year and cross-educated them so each would be ready for whatever might come their way.
How did things shake out? The worst-case scenarios did not come to pass – at least, not at scale. Some call it “the dog that didn’t bark,” which isn’t entirely true.
While AI did not deliver an outsized disruption, this past year laid the foundations for the future. In a new recap, our team reflects on AI’s role in 2024 elections around the globe and looks ahead to the next chapter of its influence on democracy. Read our 2024 recap.
2024 marked a historic moment, not just because of the U.S. presidential election, but because of the record number of national elections held worldwide, including in some of the world's most populous countries. Taiwan, Bangladesh, India, Indonesia, South Africa, Mexico, the European Union, France, and the United Kingdom all went to the polls. With AI exploding onto the scene following the launch of ChatGPT in late 2022, and with rapid advances in tools for AI-generated text, images, and video, there was widespread concern that bad actors would leverage these technologies to create fake content and sow distrust.
To engage with these first AI elections, Aspen Digital held its first convening of the year in January on the sidelines of the Knight Foundation's Informed conference. This gathering brought together researchers and civil society groups to examine the difference between AI's real-world applications in elections and the more far-fetched possibilities that dominated public conversation. From this conversation, we outlined three key areas of concern:

- Hyperlocal voter suppression
- Language-based influence operations
- Deepfaked public figures
From here, we set about preparing election officials and tech companies to address any or all of these risks.
While some AI tools were deployed with the intent to mislead, their impact in 2024 was uneven and far subtler than anticipated. Some called it "the dog that didn't bark," which, as described further below, isn't entirely true, but the worst-case scenarios did not come to pass.
At the start of the year, concerns about AI’s potential to disrupt elections seemed justified. For example:
However, these instances—while concerning—remained isolated rather than widespread.
With the encouragement of Aspen and others, tech companies also took notable steps to mitigate AI misuse. At the Munich Security Conference in February, twenty leading tech companies signed an accord committing to managing the risks of deceptive AI election content on their platforms.
While high-profile deepfake videos largely failed to materialize, a more insidious use of AI was deployed to significant effect. Authorities in several countries warned that a Russia-backed social media campaign was using AI-powered software called Meliorator to create fake personas online that then spread disinformation. Some of these accounts had over 100,000 followers and were crafted to look like U.S. citizens.
At the same time, researchers analyzing these first AI elections uncovered critical insights that show some beneficial uses of AI and ongoing challenges with disinformation spreading without AI’s aid. Those findings included:
Collaboration and early intervention this year played a critical role in raising awareness and mitigating some of AI’s potential harms. However, as the technology improves and bad actors become more advanced, AI’s influence on elections is only expected to grow.
AI-generated content will become more sophisticated, harder to detect, and cheaper to produce. Campaigns, political actors, and bad actors will likely scale up experimentation with AI tools. At the same time, ethical frameworks, regulations, and technological safeguards remain underdeveloped and must be prioritized to protect future elections.
With major elections on the horizon in Canada, Germany, and Australia in 2025—and the U.S. midterms in 2026—the foundations laid in 2024 provide a springboard for continued collaboration.
Aspen remains committed to raising awareness, fostering multi-sector solutions, and shaping the future of democracy at the intersection of technology and elections. Together, we can build resilience against emerging challenges and ensure AI strengthens, rather than undermines, trust in the electoral process.
Below is a list of contributions from Aspen Digital's AI Elections Advisory Council.
To address the risks posed by AI in elections, Aspen Digital launched its AI Elections Advisory Council, which brings together leaders from technology, civil society, election administration, national security, and academia. Led by co-chairs Klon Kitchen, Alondra Nelson, and Kim Wyman, the task force set out to identify emerging challenges and develop actionable strategies to mitigate AI-related risks.
In addition to its kickoff event at the Knight Informed Conference, Aspen hosted a series of impactful gatherings, including:
These convenings resulted in the development of checklists for mitigating risks across Aspen Digital’s three focus areas—hyperlocal voter suppression, language-based influence operations, and deepfaked public figures—providing actionable recommendations for election officials, campaigns, and platforms.
Generative AI was used to help edit this post and brainstorm the headline.