Data, Democracy, and Decisions

A.I.’s Impact on Elections

May 1, 2024

The Tech Accountability Coalition, part of Aspen Digital, hosted a virtual session on Wednesday, May 1, about the potential impacts of AI on our democracy.

Moderator Josh Lawson, Director of AI & Democracy at Aspen Digital, led experts at the intersection of tech and government through a discussion of the challenges and opportunities AI presents for maintaining the integrity and fairness of elections in the digital age. We were joined by panelists Miranda Bogen, Director of the AI Governance Lab at the Center for Democracy & Technology; Sam Gregory, Executive Director at WITNESS; and Eli Williams-Szenes, Political Strategist at Public Wise.


Miranda Bogen (she/her)
Director of AI Governance Lab, Center for Democracy & Technology

Miranda Bogen is the founding Director of CDT’s AI Governance Lab, where she works to develop and promote the adoption of robust, technically informed solutions for the effective regulation and governance of AI systems.

An AI policy expert and responsible AI practitioner, Miranda has led advocacy and applied work on AI accountability across both industry and civil society. She most recently guided strategy and implementation of responsible AI practices at Meta, including driving large-scale efforts to measure and mitigate bias in AI-powered products and building out company-wide governance practices. Miranda previously worked as a senior policy analyst at Upturn, where she conducted foundational research at the intersection of machine learning and civil rights, and served as co-chair of the Fairness, Transparency, and Accountability Working Group at the Partnership on AI. Her writing, analysis, and work have been featured in the Harvard Business Review, NPR, The Atlantic, Wired, Last Week Tonight, and more.

Miranda holds a master’s degree from The Fletcher School at Tufts University with a focus on international technology policy, and graduated summa cum laude and Phi Beta Kappa from UCLA with degrees in Political Science and Middle Eastern & North African Studies.


Eli Williams-Szenes (he/him)
Political Strategist, Public Wise

Eli is a queer political strategist with over 10 years of experience in state legislative leadership, campaign management, and political landscape analysis. For the past several years, he has focused on running statewide and national issue advocacy campaigns on behalf of reproductive freedom and voting rights organizations. Eli has a particular interest in how applying a racial justice lens shapes campaign planning, from research to messaging and voter contact, and is always looking for new research on activating infrequent voters and expanding the electorate to better reflect the nation. He lives in Takoma Park, MD with his partner and children.


Sam Gregory (he/him)
Executive Director, WITNESS

Sam Gregory is an internationally recognized human rights advocate and technologist, and an expert on innovations in preserving trust, authenticity, and evidence in an era of increasingly complex audiovisual communication and deception. As Executive Director of WITNESS, he leads their strategic plan to “Fortify the Truth” and champions their global team, who support millions of people using video and technology for human rights and civic journalism. He has testified in both the US House and Senate on AI and synthetic media and is a TED speaker on how to prepare better for the threat of deepfakes and deceptive AI.

In 2018, he initiated WITNESS’s Prepare, Don’t Panic initiative (gen-ai.witness.org) around deepfakes and multimodal generative AI, the first globally focused effort to ground these technologies in the realities of frontline journalists and human rights defenders. The initiative has directly influenced platform policies, the design of emerging technologies for trust, and public discussion of who and what to prioritize.

Sam served on the International Criminal Court’s Technology Advisory Board, co-chaired the Partnership on AI’s Expert Group on AI and the Media and led the Threats and Harms Taskforce of a leading coalition to develop standards for media provenance (C2PA). Recently he published ‘Fortify the Truth: How to Defend Human Rights in an Age of Deepfakes and Generative AI?’ in the Journal of Human Rights Practice. He holds an MPP from the Harvard Kennedy School and a PhD from the University of Westminster.
