
Aspen Digital

A.I. Risks Facing the 2024 US Elections

June 17, 2024

Tom Latkowski

Program Associate

In this high-stakes election year, it’s never been more important that voters have unimpeded access to reliable and truthful information about the candidates, the issues, and the voting process. Amid an already fraught information ecosystem, artificial intelligence (AI) adds another alarming layer of risk.

That’s why Aspen Digital’s AI Elections Advisory Council created three risk checklists focused on areas where AI tools make it easier for bad actors to discourage and disinform voters: hyperlocal voter suppression, language-based influence operations, and deepfaked public figures.

The Advisory Council is a non-partisan group composed of civil society and technology leaders taking steps to build democratic resilience in the face of AI. The Council is chaired by Alondra Nelson, Klon Kitchen, and Kim Wyman. It is part of our ongoing AI Elections Initiative, which works to secure US elections in November and beyond.

To help ground public discussion and leadership action in the facts underlying specific AI concerns this election year, these new AI Election Risk Checklists center on:

  • Hyperlocal Voter Suppression: For years, bad actors have attempted to impede voting by spreading false information about when, where, and how to vote. AI tools can generate convincing content quickly, including personalized details and interactive exchanges that add credibility to false information. These tools make text message campaigns, interactive robocalls, and fake local news websites cheaper to run at scale.
  • Language-Based Influence Operations: Artificial intelligence makes it easier to create content in any language, using automated translation tools. While these tools can be beneficial in many settings, in the wrong hands they can make spreading falsehoods easier, faster, and harder to detect.
  • Deepfaked Public Figures: As artificial intelligence improves, it has become easier to create convincing images, audio, and video depicting public figures saying or doing something that they did not.

These risks are not inevitable. The checklists detail steps that election administrators, social media and messaging platforms, AI labs and companies, news media, advocates, and civil society groups should take to mitigate the harms that AI could pose to our democracy.

As Americans prepare to vote this November, we must not let fears of artificial intelligence deter us from participating in our democracy. Bad actors might try to keep us home this election, but they can only succeed if we let them. At Aspen Digital, we’re committed to partnering with those committed to election resilience to keep our democracy strong and to help ensure that every American has the information and access they need to cast their vote.

The AI Elections Advisory Council consists of:

  • Alexandra Reeve Givens (Center for Democracy & Technology – Pres. & CEO)
  • Alexandra Sanderford (Anthropic – Head of Policy and Enforcement)
  • Arjun Gupta (TeleSoft – Managing Partner)
  • Becky Waite (OpenAI – Global Elections)
  • Ben Scott (Reset – Executive Director)
  • Brad Carson (Americans for Responsible Innovation – President)
  • Brian Hooks (Stand Together – Chairman & CEO)
  • Chris Krebs (SentinelOne & CISA – Fmr. Director)
  • Claire Wardle (Brown – Professor)
  • Damon Hewitt (Lawyers’ Committee for Civil Rights Under Law – Pres. & ED)
  • Danielle K. Citron (UVA Law – Professor)
  • Dave Willner (Consultant & OpenAI – Fmr. Head of Trust & Safety)
  • David Becker (Elections Innovation Center – ED & Founder)
  • David Vorhaus (Google – Director, Global Elections Integrity)
  • Gary Marcus (Social Scientist)
  • Ginny Badanes (Microsoft – Democracy Forward, Director)
  • Irene Solaiman (Hugging Face – Head of Global Policy)
  • Jane Harman (Fmr. U.S. Rep. for CA-36 & Wilson Center – President)
  • Jennifer Morrell (Elections Group – Chief Executive Officer)
  • Joe Amditis (Center for Cooperative Media – Assistant Director)
  • Justin Erlich (TikTok – Global Head of Issue Policy)
  • Kelly Born (Packard – Democracy, Rights, and Governance Director)
  • Larry Norden (Brennan Center – Senior Director)
  • Maya Wiley (Leadership Conference on Civil & Human Rights – Pres. & CEO)
  • Michele Jawando (Omidyar Network – SVP)
  • Nate Persily (Stanford Law – Professor)
  • Neil Chilson (Center for Growth & Opportunity – Sr. Research Fellow)
  • Nick Penniman (Issue One – Founder & CEO)
  • Raffi Krikorian (Emerson Collective – CTO)
  • Rebecca Finlay (Partnership on AI – CEO)
  • Sam Gregory (WITNESS – Executive Director)
  • Thomas Rid (Johns Hopkins – Professor of Strategic Studies)
  • Vilas S. Dhar (McGovern Foundation – President)
