hey everybody I’m Josh uh I work here atAsen on the director of AI and democracyand have been doing some work uh on thetopic that we’re going to discuss todaybut before I kick it over to oneoffintroductions really excited to haveMiranda here who’s the founding directorover at CDT of the AI governance lab Ithink is what we’re calling it uh Sam isthe executive director over at witnessand has been leading a lot of work catchTed Talk multiple TED Talks at thispoint I don’t know catch just Ted Talkum at the intersection of informationfactuality public trust democracy andhuman rights really uh and then Eli is apolitical consultant does a lot of workuh has done a lot of work for a numberof years in the kind of ReproductiveRights advocacy space but has worked ata lot of uh National and Statewidecampaign so I’ll kick it to you for kindof first introductions uh for everybodyand then we’ll get into thesubstance thanks Josh so nice to be hereum I’m back I was here last year umunder the offices of my role at metawhere I worked um on the policy teamwith the responsible Ai and Equity teamsand so it’s nice to be back in adifferent context um my work is reallyabout translating what the policyconversation is around AI topractitioners because we know that thatGap can be vast um and that theimplementation of some of the you knowpolicy ideas um can really make or breakwhether those ideas are are effectiveand obviously particularly at thismoment in this year there’s so manyelections um AI is is you know shapeshifting every day um that’s a huge partof my role too although CDT has teamsfocus on Election specifically so I’llbe bringing in some of their expertisedo I’m going to talk about anysubstantive stuff or just intros justintros at this point we’ll get pass iton hi everyone really glad to be here soSam from witness we’re a human rightsorganization we work globally withFrontline communities of activism andjournalism who are using uh the digitaltools of video social media the mobileweb 
um for their work um so we’re veryfocused on how they are optimized tomake sure they work for those purposesin a way that um is Equitable in inAccess in usage and in security um forthe past five or six years we’ve workedon a range of emerging technologies thatwe see impacting the capacity of peopleto be equal participants in sharinginformation and being trusted um and sothat’s led us to work on deep fakes andsynthetic media and Ai and of course allof that all of those chickens are cominghome to roost at the moment um and sohow do we respond to that is very muchwhere we’re focused at the moment from amode of prepare Don’tPanic I’m Eli I’m really happy to behere thank you um I come at this fromelections um I’ve been working inelections and sort of in the guts ofElections for a number of years um Imostly work in a a pretty interesting uhlittle corner of Elections which isindepend dependent expenditure campaignsum if you’re not familiar which is wherean entity that is not the candidatethemselves puts on a fully parallelcampaign to get a candidate elected umI’ve done that for Planned Parenthoodfor a number of years I’ve done that fordemocracy reform organizations for anumber of years and I’ve also been acampaign manager for candidates um guesswhich one is better not working withcandidates um and uh uh the thedifference between us is that mybackground in the way that AI Works inCivic spaces and in election spaces isthat it is it is principally about whatgives us agida and what we are reactiveto um and what damages our efforts umand I think the election space is juststarting to move into a frame of how canthis help us instead of how can wecounter Bad actors um so I’m excited totalk about all that really excited toget into kind of the the the topic RITlarge but before we do on the AI sidespecifically I’m curious to kind ofground theconversation framing out elections as afoream function as a ground zero fordifferent types of Technologies and whatwe may have learned or not 
learned uhover the years and so actually Eli we’llstart with you um curious what yourthoughts are it can go back as far asyou would like it can be a decade it canbe a hundred years but intersection ofnew tech new social adaptation andelection specifically yeah so um he’squeing me up cuz I told him somethingbefore this um we made it happen um so afun fact that is not well known is thatcampaigning for elected officeoriginated in the United States um andit originated just shortly after therevolution it used to be consideredunseemly to campaign um people who werecandidates for high office whether itwas governor or president um it wasconsidered ungentlemanly of them to goaround asking people to vote for them umand that all changed in the late 1700 inearly 1800s when Jefferson and Burr andHamilton all started printing thingsthat they would post around cities andthat they would print in um Urbancirculating newspapers to advocate forthemselves and their candidacies and itwas considered shameful it wasconsidered grotesque but among the manythings that’s interesting about it isone of the things that was consideredwrong was that it targeted Urbandwellers they were posting theirmaterials in urban spaces they wereputting their advertisements in urbannewspapers and there was a major concernthat it was leaving out like thegentleman farmer variety right obviouslythere are tons of people who don’t livein urban spaces who are not gentlemanFarmers even then but in their framethey’re talking about people who at thattime had the right to vote um and it wasconsidered a real problem that wasdriving a wedge into the already sort offractious young Republic um becausethere was already that tension betweenare we an agrarian economy or will webecome an urban industrial economy umand while they were fighting aboutwhether that was a good or bad thing itended up not mattering because it justtook off right that’s how we didelections from then on in this countryso one of the really interesting 
earlylessons in our electoral history is thatTech Innovations morally neutral goodbad or ugly you introduce them to anelection they’re staying in an electionand so the question is how do we moveforward withthem go down the rowhere um so I I think when I think abouttech as a forcing function I think aboutsome scenarios where we’ve seen it veryproactively in the context we work inand witness Works globally including inthe US um on um and and I’m going toparticularly look at through the lens ofsocial media so you know very proactivethings like putting together War roomsnot a great phrase but election Warrooms to prepare resourcing journalistsresourcing fact Checkers those are allproactive but they’re frankly at themargins of the problem there arereactive Solutions and um a lot of mywork is and I’m not to say this on apanel on elections is decenteringelections and looking at what’s nothappening during the elections so Ithink about obviously you know responsesto things like the Myanmar genocide andtrying to think about how we improve AIbased classifiers but also really thinkabout how content moderation works verybroadly uh maybe more positively wemight look at something like WhatsAppmessaging and limiting forwards rightthe kind of reacting to something whereyou seeing pervasive spread ofmisinformation so following a problemlet’s try and come up with a solutionum and I want to put the layer that wealso encounter and I think willencounter around Ai and we are seeingfor example in the Indian elections ofcoercive action by governments um usingelections as a forcing function oncompanies and on civil societies we’vegot proactive reactive and coercive andI guess what we want to put out of avision is much more how do we have moreof the proactivity in a more inclusiveway that is also more structural and Ithink that’s when I look at kind of myexperience of looking at Techparticularly around social media whichis what we got to learn from what wedidn’t do well as we move into the 
AIera is how do we move away from theparts of that were reactive too late ortemporary and notstructural from you know my perspectiveI think elections especially in the USalthough certainly not exclusively are aquintessential Equity issue because it’svery much about who um is given theagency to have their voices heard andwhat are the structural issues thatprevent people from fully participatingin in uh in the institutions uh thatwill enable them to do so and there’sthere’s so many lessons to learn aboutthe ways in which um we have you knowmade progress and and regressed inEquitable participation in politicalprocesses over you know the thecenturies here that I think um from aproduct and Technology perspective wecan learn from how do we know if thereare disparities what are the structuralissues that we might not realize or somepeople some people might not realize arepreventing people from participating andother people were very very consciousabout those choices that they made umyou know lots of parallel lessons tolearn and then on the flip sidetechnology you know informationtechnology has always played theshifting role uh in um enabling peoplewho are not in positions of power tohave their voices heard but then beingco-opted by the the forces in power toum continue the status quo or shut downthose voices and each new iteration ofTechnology kind of goes through thatcycle and you know political campaignsin particular have tended to be on theForefront of some of the newerTechnologies especially recently theObama campaign you the um what was hisname got on the tableHoward uh and and now ai you know therethese tools enable campaigns to scaletheir messages to personalize theirmessages um in ways that they they seebeing useful and for for candidates whoum wouldn’t otherwise have the resourcesto participate that could be reallygreat and for for candidates who um youknow have designed on what electoratethey want to to come out and vote forthem with a grander design toward 
Whatpolicies they ultimately want like thatcould end up being a tool to perpetuatethe inequities that Society uhexperiences right now so I think thatyou know both that that Legacy ofElections as an equity issue and therole that technology plays in shiftingpower are are just a great instigatorfor the conversation Randle we’ll stickwith you them for a minute I mean ourbackground we we both were in Techpolicy before and within the companiesthemselves and I’m curious how you seethe forcing function aspect of Electionsinternally as shaping the prioristructure the challenges that Society uhsurfaces and doesn’t surface uh aroundkind of seminal election events how doyou think that Tech policy people shouldbe appropriating that thinking aboutthat as we head into November in theUnited States so as I’m sure everyone inthis room uh uh feels viscerally thereis a lot to do within these companiesand the name of the game isprioritization and there’s alwaysquestions around what’s going to end upbeing prioritized what resources will beallocated to what and unfortunatelytimeliness and public attention is oneof the factors that leads to thoseprioritization decisions within that youknow tech companies and this is someresearch I did back when I was uh um inan academic context um there’s aspectrum of the motivations they have tomake policy decisions to makeprioritization decisions productdecisions Etc from like intrinsicallyunderstanding the role they’re playingin society and especially the socialplatforms uh in their early days weretrying to take on this grander um rolein society and mediating kind ofinformation and appreciated you knowthat the extent to which that that wasthe case um in other cases they justunderstand that in order to havelegitimacy with the broader Market withtheir users with the the public and withum policy makers they need to um youknow be sensitive to that role and andkind of make reasonable efforts to makesure they’re not disrupting somethingand in other cases 
it’s more about likeCrisis management um and and fear thatyou know something acutely wrong ishappening and they’re going to be toblame um so you know that we see sort ofdifferent flavors of that acrossdifferent companies um and acrossdifferent levels of maturity ofcompanies so um companies that have beenaround longer have sort of publicstakeholders like look different um butyou know regardless of the motivationthere I think uh these big publicmoments do Drive attention and umresources get get paid to them I thinkwe’re seeing less resources on electionsthis year unfortunately even though it’sthe biggest year for elections and youknow sometime and for sometime uh in thefuture and it could be pivotal year andso um the question is you know how canwe leverage that attention to focus notonly like what is the infrastructure wecan build with that attention that’s notunique to elections but that will helpin that you know everything that’s notelections that still are reallyimportant and also how do we make surethat we’re not just focusing on the mostprominent National elections in thebiggest markets because the the realchange will come in the longtail ofeveryone else who isn’t being paidattention to you know if there’s asynthetic media about them no will notno one will notice it’s in differentlanguages it will go unaddressed but itcould still have you know an enormousinfluence on Society on local policy umwhich can bubble up into National policyand so if we only pay attention to thetop of the iceberg um we’re going tomiss out on a whole lot of reallyimportant impacts so you know I think wetake the attention on the top to tobuild out something that will help withthat whole ecosystem going to throw anAI layer onto all of it but before we doanything more uh either Sam or Eli onthe kind of forcing function the waythat we should think about elections andchanneling that for positive change onthe equity front thiscycle no nothing more forme um you know the only thing I wouldadd to 
this is that coming off of whatMiranda is saying um as as as with thatwith elections the top is always whatpeople are paying attention to um forplenty of fine reasons um but it is infact the further down you go on theballot the more you are talking aboutthe people who create the laws thatimpact your life on a daily basis rightlike something like 80% of the laws thatpeople interact with on a daily basisare created by state or locallegislators not the federal governmentum and that’s to say nothing of ofcourse in A Moment Like This the rolethat state legislators are playing asback stops against a lot of work um andthat that those Tech Innovations and theway that they can funnel down tocampaigns in really positive generativeways do not make it to local Races theyare expensive the Consultants who teachyou how to use them are expensive andlocal races as much time as we spendthinking about how much money is spenton top of ticket races local races arecheap which can be great if you are agood actor and want to get involved umbut they are absent of the resources tomaximize the the tech innovations thatwe have if you are trying to as oneshould at this time flip the ArizonaState Legislature um there is no chancethere is no possibility that the peoplewho are running those campaigns haveaccess to the tech that could make thosecampaign plans more efficient that couldmake their voter targeting moreEquitable and just based on their valuesystems there’s they have no access toit and so one of the things that wouldbe a great step forward in the CivicTech partnership space is figuring outhow to marry those topof ticket advanceswith those bottom of ticketopportunities that’s a great pointso it’s been what 18 months or so sinceopen AI rolled out chat GPT and we allstarted talking about generative Ai andthere was this kind of new set ofquestions some of which were justreframings of old questions that we’vehad around information Integrity who’sleft out of conversations uh indifferent 
information silos news desertsthings like that and how that might beexacerbated I’m curious uh we’ll startwith you Sam what should we be watchinguh there have been a number of Electionsto date the interaction and use of AI orthe lack thereof what should we benoticing right now that should informour perspective going forward yeah andyou know we’re we’re four months intothis sort of Omnibus election year and18 months into sort of the generative AIbeing broadly available broadlycommoditized and getting increasinglyeasy to use and I think we all aware ofthose technical trends that underly thisso less data required to createsomething much more access to it andvery significant improvements in imageand audio right and so the way we’reseeing that play out and I think it’sreally helpful as we sit here in the USat the moment and think about the USelections also to look at thecomparatives so if we look at whathappened in Bangladesh PakistanIndonesia what’s happening now in Indiayou actually see far more widespread useof for example deep fakes and syntheticmedia than here right and so uh thereare potentially some reasons why that isthe case and why it might not be thecase here but I don’t think we shouldmake too many assumptions so if we looklet’s let’s take some of the ways we’reseeing it play out at the moment in inglobal context there is extensive use ofAI to create personalized messaging fromcandidates you know over 50 millionmessages sent in India where candidateshave created avatars where they speakdirectly to people use their name umhave lipy sync dubbing that matches itthat is a widespread industry that iskicking off with very little regulationand very little control it’s notpersonalized the individual I thinkthat’s a fear Mong of like we get thesemassively personalized like interactiveAI that is not happening but it is youknow sort of distributed messaginghappening a lot of campaigning that’sabout humanizing candidates in ways thatsort of nudge the edges right 
likemaking cuddly avatars of Indonesianpresidents making your your primeminister sing and making thatparticipatory kind of what someonecalled Soft fakes R and chowdery um andand then of course people using it toattack and deceive and right and I thinkit’s really important and Josh you werementioning like this is grounded inexisting Dynamics one of the existingDynamics this is grounded in is genderbased violence and uh non-consensualsexual images right and we’re definitelyseeing that it’s also nuanced right sofor example in Bangladesh candidateswere being placed in swimwear right umrather than being nudi right which isobviously you know a very pervasive andchallenging phenomenon but we’re seeingthat expanding it obviously targetspublic and private figures and then youknow much more deceptive ones in a veryexplicit election context right so beena characteristic of like the electionsto date this year of having candidatesin the last few days appear to saythey’re boycotting an election or askingyou to vote for the other candidate orwith drawing that happened in TaiwanIndia Bangladesh Pakistan right so verycharacteristic um and of course theseaudio deep fakes that we’ve experiencedin a very minor way in the US honestlywith the Biden Robo call in some sensecompared to some of the ways we’reseeing it and so you’ve got this widerange of actions and it’s not quite thehouses on fire honestly from myperspective but it is very significantas a trend line and the thing and I hopewe’ll come back to this of course mostof the counter measures either areproduct or a policy level are either notthere or are inadequate to serve bothGlobal populations and a broad diversityof the US population who is going toengage in elections this year is it fairto say that abroad we’re seeing theseTech tools used to try to motivateBehavior rather than to demobilize atthis point or we not sure yet I I thinkwe’re not sure um I think there are veryexplicit attempts to to get people notto vote which also 
include influenceoperations right there’s clearlyevidence of sort of influence operationsusing often quite blunt AI imagery inTaiwan and in the US context um so Ithink it’s a mix of stuff and I thinkyou know we’ve got questions as we lookahead about what it means to have avolume of this content a personalizationthat we haven’t yet grappled with that Ithink rais really troubling questionsabout demotivating voters the otherthing that we see and and I think thisties into the lack of solutions isaround the absence of ways to detect andunderstand this content that allows sortof plausible deniability so it’s a realchallenge that people dismiss realcompromising content as they have beforeright this is not novel um as beingfaked and sort of place the burden ofproof on on publics to to disprove thatso really great Point Miranda same kindof question what should we be noticinguh especially as we approach November inthe UnitedStates well I think you know ascompanies are building more generalpurpose tools and not just generalpurpose platforms which have largeimpacts but these tools that can do somany different things and require a muchmore defensive approach to preventingall the bad things they can do I thinkwe’ll see you know unexpected ways youknow people are talking about umsynthetic media they’re talking aboutTarget targeting they’re talking aboutum they’re talking about uh votersuppression things like that but thereare probably going to be all sorts ofthings that people didn’t anticipate andthe trick with this and thinking about ageneral purpose technology and makingprioritization decisions around whatyou’re going to address is that questionof like what can we imagine could gowrong and can we build something todetect it and prevent it and usually thefirst stop of any you know multinationalcompany is like English maybe Spanishmaybe another language um because that’sthe the most prominent and noticeablesort of set of issues and that justleaves so much on the table so if we 
canif we notice something in the US orEnglish context at all um we can’t wedon’t know that we will we certainlywill not uh in other languages if we’renot thinking about that holisticallyfrom the beginning but do the teams havefolks who can imagine what that lookslike in you know an internationalcontext can they imagine what it lookslike to disenfranchise um disabledvoters for instance I was part of a aPlanning Group for this uh election redteaming event where we tested a bunch ofthe different um interactive sort of llmuh tools on reasonable queries thatusers might make about voting and inabout 70% of cases I think it was theanswers were inaccurate but not in waysthat were obvious they were mostlyaccurate except for the piece that waslike determined the meaning of theoutcome but you wouldn’t know unless andwe had at the tables in this red teamingexercise election officials from thejurisdictions about which the questionswere being being asked and only byhaving them sitting at the table theycould be like that sounds reasonable butthat that’s actually wrong it’s outdatedor there’s a court case um or you couldimagine how someone would think that butlike that’s not the advice we give tothe the poll workers and so figuring outwhat the uh threat scenarios are is isdifferent than in other products whereit’s just like is it failing you knoware people um you know falling off oryou know are they being deniedopportunities or something which isstill hard to detect we don’t have thebest ways to do that but um it’s atleast a little bit more tangible and inthese cases the the issues are moresubtle and more cumulative and you knowI I think we’ll get pretty far alongbefore we quite realize the issues thatare coming up so that’s the biggestthing I worry about is there’s a lot ofattention to the like biggest andloudest and most obvious issues andstill not solutions to them but there’sa lot more that’s going to be coming upthat we’ll need to be ready to tackleonce it sort of takesform 
youium a prediction that I hope is wrong butI think is right um is that I wouldwager in the American 2024 context andI’m sure 26 and on um most of theadvances the meaningful Tech advances uhparticularly in AI will probably use beused for suppressive efforts not formotivating efforts um we’ve seen italready uh in in plenty of ways andunfortunately the thing that’s true isthat Tech advancements that begin asmotivational are always eventuallyco-opted and turned into somethingsuppressive right so there’s a directline between from Howard Dean to theObama data boys to Cambridge analyticaright it’s just like one two three um soI think that from the Civic space andthe election space the thing to do is tonot try and turn ourselves into Techexperts and to build as strong arelationship as we can with folks in thetech sector because if we can walk inassuming that the efforts will besuppressive in nature um then we can dothe background work because we know whopeople try to suppress that’s no mysteryin this country right we know who we’resuppressing when we’re suppressingvoters um and instead engage in thereally and you know on the one handwe’re late at it on the other hand nevertoo never too late to start um the workof understanding the um the things thatgo into making suppressions so easy forsome communities um particularly and Ithink this is where the AI aspectinteracts in in the scariest way for meas a as an election practitioner um thelack of trust in the system um in incommunities that are traditionallydisenfranchised is growing right there’sgood research on this um when you lookat a community that has measures ofdisenfranchisement and you can even givethem like disenfranchisement quotientsright as the disenfranchisement quotientgoes up the trust in the system and thefaith in the electoral process goes downum and something that election folkshaven’t quite figured out yet um is thatyou know the the addage that if you’reblack in America you got to work twiceas hard to get 
half as far we need totake that on in in the reverse in thepositive obligation sense and think ofit as if you are trying to motivatedisenfranchised voters then you need towork twice as hard to get half as far inupping their faith in the process andthe system um and that means that youneed to go in really early with anunderstanding of the ways they’ve beentargeted in the past that is where GreatTech Partnerships could be superproductive um and work in Tech and nonTech ways at addressing those reasonswhich are almost always very valid thatpeople lack faith in the process in thesystem Eli let me tie that to thetechnology so llms one of the keyadvances is in the text side of thingsnot necessarily multimedia and in thelanguage capacity right communicatingeffectively compelling with the rightidiomatic expressions in a language thatyou have no expertise in was really hardfor bad guys before very very easy nowso you could see something like a majorWhatsApp campaign targeting uh Arabicspeakers in Dearborn saying to boycottthe election and is that the type ofthing that you’re talking about groundedin the tech partially yes so and indeedwe have seen that right um we have seenMass WhatsApp chains deployed in Georgiaum in 2018 and in 2020to try and demotivate and suppressvoters in Asian-American communities umthat were in language and that hadeverything like the aura of authenticitythey looked like sort ofgrassroots-based mass mobilizationefforts that were in fact demobilizationefforts um and then the other the otherproblem is and it’s I’m not a techperson so I’m going to go high level onthis the the it is impossible to oversethow scared of deep fakes people in theelection space are um because they havethe ability to confirm everyone’s worstfears and democracy functions best whenwe are not running on our worst fearsdemocracy functions best when we areworking on our highest hopes um but ifyou create a system where everyone mightsee the candidate they are on the fenceabout and 
that person is saying exactlythe thing that they fear that personreally believes and really willImplement you’re not going to get him tovote for the other guy you’re just goingto get him to stay home right um sothat’s really fear that there is thereis such a there is such a narrow TippingPoint for an impactful enough portion ofthe electorate or I want to point outthis part’s really important thepotential electorate because we havecriminally low participation rates inthis country um it’s just there are somany good reasons for the the lack oftrust and faith in the system that anyany deep fake can just tip a wholecommunity over into notparticipating um and it’s it is the talkof of the town if you sit down withpeople who are working on electionsright now and you ask them what keepsthem up at night nine out of 10 answersdeep fakes just terrifies people Samthere’s the sociological aspect of thissome of what Eli was getting at um andthen there’s the technological side onething you know back when I was at metasay in the leadup to 2020 the publicexpectation and conversation that wasbeing had was what are you all going todo in the information Integrity spacebut because all the focus has been on AItools the question is now what are yougoing to do to technically detect andtherefore label this type of contentwhich are fundamentally different thingsand you have been doing some interestingwork around the concerns on the erosionof trust the Liars dividend could youfill out a little bit of thesociological side for the folks hereyeah and I think Eli what you’redescribing there is I mentioned the ideaof plausible deniability which is theidea that you know because of photorealism audio realism it’s very easy toclaim that something is uh real whenit’s falsified um and this plays into atechnical Gap that is worth reallythinking through because it’s filledwith concerns around equity and accessbut I also think there’s plausiblebelievability which is the actuallyprobably what we see more 
commonly andit’s a Trope of what we already have inMiss and disinformation is people leaninto what they want to believe about anindividual or within their politicalViewpoint and we’re seeing that withdeep fakes so I think we need to thinkabout both sides of this sort ofplausible believability plausibledeniability and it is linked totechnical questions right because uh oneof the things and you know witness worksat multiple levels we do policy work wedo work with companies on the technicalelements and product elements but wealso run a rapid response mechanismaround people who have cases ofsuspected deep fakes that appliesglobally so journalists and factCheckers bring it and the reason we runthat is that for most Frontline electionofficials journalists fact Checkersthere are not Reliable Tools to be ableto detect fakes to be able to explain itto their public in a climate wherepeople are skeptical about scientificcommunication where journalists and andothers are not well equipped to do thisand there’s a range of reasons whydetection tools don’t work well this isnot to to cast shade on uh peopleworking in this space it is hard tobuild reliable detectiontools but it’s also hard to buildreliable detection tools that are easilyexplainable to thepublic and we’re not investing in thedata sets and the tools that are builtfor the for the world and for the USrather thanEnglish-speaking um white individualslet’s look at audio and um and video andimage detection right so there is atechnical Gap there’s also a a mission avery obvious emission in terms ofdetection and what that means is it’svery easy to cast doubt on content umand we haven’t invested in making surethat those who are most vulnerable andthe communities who are most targeted bythis have the most access we’ve flippedit the other way around right so thecommunities that have the most access todetection are the ones who have the mostresources and perhaps are the most wellprotected to begin with and then ofcourse we have 
the authenticity side youwere saying right which is you know howdo we know where a piece of media camefrom and all the same questions applythere right this is the big sort ofthing that everyone is latching on tofrom a legislative side and from atechnical side is let’s show people therecipe of how AI was used to make apiece of content you’re experiencing butthat the decision around that is Ladenwith questions around privacy aroundaccess to Tools around how you buildthose in a way that doesn’t reinforcebias or existing problems and and Ithink that’s why this room needs toreally invest in that space now becausethe tools are not going to be here forthis election they’re not going to bewidely used but they will be here in2025 2026 and we are setting thefoundations for how we understand trustand who gets trusted in our societiesand that’s not just about deep fakesit’s about all content um and I thinkthat’s the key right now is to sort ofthink from that perspective even withthe right forensic tools it’s alwaysgoing to be a confidence interval rightwe are 70% sure that this is syntheticEtc so it’s always going to have anelement of trust we’re only ever goingto get to 85 to 90 with detection toolsso you got to work out how do youexplain 85 to 90 when people rightly areskeptical of the tools because they’renot great for many communities andthey’re not accessible and they haven’tbeen made accessible for thosecommunities it’s a great Point Mirandalike I’m curious how you’re thinkingabout the risk that people throw uptheir hands and say we’re going to votebased on Vibes gut instinct because wedon’t we don’t have facts we don’t knowwhat’s factual how are you seeing thatkind of risk set well I think the riskof Vibes is actually more pervasive thanjust in the elections and I think it’scoming up in all sorts of generative gengenerative AI contexts you know relyingon reinforcement learning from Humanfeedback relying on um you know sort ofthese alignment procedures which arelike 
basically content moderation but less structured and less inclusive. And we've forgotten the lessons we've learned about making sure that those definitions of meaningful concepts that affect people's lives, that will be used to determine what information they can get out of a system or share on the internet, don't fall into the traps we've found before. I think we're forgetting all those lessons, but we're kind of throwing reinforcement learning from human feedback at it and calling it okay. An adviser of my team, Dave Willner, if anyone's familiar, calls it "vibes-based content moderation." And so I think that the rush to build out these exciting new tools is leading to shortcuts of all kinds, and that is going to lead to reduced trust in all contexts, whether it's elections or otherwise. But there are also some more basic, infrastructural questions that lead to that reduction of trust that don't have anything to do with AI. My organization, a year or two ago, did a report on the websites of election officials across the country and found that, I think, less than half of them used .gov domain names. So there are some pretty basic interventions that could create indications of a little bit more trust, that maybe would say, yeah, this is a real thing and they're the ones I should get information from, and not this other website that's saying to vote on the day after the election. And so in the equity context, I think that's an approach I've always found useful to take. There's one approach to thinking about equity and access to elections, which is get-out-the-vote to specific communities and meeting people where they are, and that's really important. But sometimes, if it's more expensive, if it's controversial, if people then presume a political motivation for something, that can
create barriers to actually accomplishing that work, especially from a platform perspective, and not from a particular political-persuasion perspective where that's expected. But think about what those upstream interventions are that actually affect people's ability to know what's going on, to access information, to access voting sites. You know, Josh and I were talking about this phenomenon where a lot of polling places are in churches, which are not subject to the Americans with Disabilities Act. They do have some requirements around access, but if they aren't subject to it the rest of the time, are they really going to be up to par every couple of years on this one day when they are required to have more access? So what are these upstream interventions that are not necessarily tied to elections, but that, if we address them, maybe will help in these key moments? And it gets back to: are elections a good motivator for things? In one sense, yes; timeliness and public awareness can get resources thrown at things. On the other hand, if we only tackle things when there is that public attention and awareness, and we leave off this long tail of upstream interventions that could actually help in multiple contexts, that's leaving a lot on the table, including things that might improve a whole lot of other things, like education systems and community services, things that will increase trust within communities and build the kind of social ties that are important to get out the vote later on. And so thinking about it just within the election cycle is going to be limited.

That cyclical nature of focus and attention, that binary back and forth: I am curious how you're thinking there, because while this is the first AI election, if people sometimes characterize it that way, this is the most simple set of AI tools that we will ever see in any future
election, right? But the concentration, the focus, the prioritization on this moment will decrease after this election cycle. So from a governance side, you lead the governance lab, I'm curious how you're thinking about trying to maintain push in those areas.

Well, I think this election is not only influenced by AI, but it's also going to influence how we govern AI. Lawmakers right now are debating interventions related to elections, related to civil rights, related to thinking longer term about the security and governance implications of AI, and they don't really know what they're doing. Some of those efforts might not go anywhere; some of them might get passed, but then they still need to get implemented. And I think whoever comes into power, in local, state, national, and international contexts, will be playing a role in governing this technology, and their political interests are going to play a role in that as well. How aggressively are we going to try to protect against these issues if whatever worked this time got us into power? How much are we going to focus on longer-term future iterations of AI systems that could be really troubling, that could have quite a disruptive effect on democracy? Paying too much attention to those can take attention away from the existing issues that AI is already causing for people and communities in terms of access to economic opportunity, jobs, credit, voice, etc. That's a big tension in the space right now. And so that's something we're paying attention to: what are these different proposals, and what are their implications, both for actually solving the problems they claim to solve, which sometimes they won't, they're just kind of throwing things at the wall, and sometimes they'll be quite positive, but people don't quite recognize how some of the proposals, even if they don't seem to address the most acute issues, will actually be critical to building a structure where we can govern
technology as a society, where people can have a voice in how technology is governed, either directly, in conversation with the people building it, through co-design, through participatory venues, or through their elected officials, having a voice in that broader context. So it's that multi-level intersection between the technology and governance questions that we're thinking about.

Eli, I mean, Miranda said it: there are areas where this is going to be acute, but acute is community by community, right? And we know the communities in the United States that have been just historically picked on in terms of democratic participation, the disparate impact of these technologies, and the lack of focus around those communities. How should the folks in this room, and those watching in the tech sector, be thinking about maintaining an awareness of how these risks may manifest vastly differently across different communities?

Yeah, you know, I think the most honest answer to that is a hard one, which is that it requires very local expertise. You cannot figure out what the quirks are that cause people to participate or not participate, or to have access issues or access ease, in southern Georgia unless you're from southern Georgia. That's just true. Or at least working in southern Georgia. And so the most hopeful thing I could imagine would be that the tech companies that are really leading at the forefront of these things are doing the unsexy work of reaching out to the most local-level people who are administering elections, and also just administering civic structures, to figure out at that granular level what the potential is for messing things up there. We know that state by state is different; we've accepted that at the campaigning level. We understand that talking to Michigan is not the same as talking to Florida. But unless you get to really super granular political operatives who make their
living going from, like, locale to locale, there's a real lack of appreciation of the distinction between talking to an enclave in southern Florida versus central Florida, or talking to someone in rural New Mexico versus rural Arizona. Rural New Mexico and rural Arizona are really different. They're really super different in the things that motivate them, the things that agitate them. And the best-case scenario would be a lot of really granular, close partnerships. It's pushing a rock up a hill, but it's the way you get the rock to the top of the hill, even if it takes a very long time. That would be my hope. It also requires a lot of effort at the middle level, like the state administrative level, to compel those folks at the very local level to engage with the tech companies. Because I've worked in state and local government, and for every local and state elected official who is just thrilled to pieces to get outreach from a tech company, there is another one who is rolling their eyes, and doesn't have time, and doesn't understand, and knows they don't understand, and doesn't enjoy not understanding things, and has just better things to do. So some mid-level incentivization of those very local folks to want tech partnerships that are educational would be very, very, very productive.

And that's been an encouragement from civil society for a really long time. I'm curious, Sam: given that that has been what we have been hearing, you need local partnerships, you need to actually listen to the people who are talking to real humans, what are the lessons that you think are most important that we carry into this new era, the things we should have learned, wish we had learned, over the course of social media development at the intersection of elections for the last decade-plus?

I suspect I'm preaching to the choir here. I think we know these things, and that is part of the problem, right? We know what the problem is; it's how do we implement it. So I say this with humility to this
room as well. I think we have to really make sure we're centering from the start the communities who, and this is to Miranda's point, there are a lot of hypothetical harms in AI, including in generative AI, but communities face existing AI harms, or the pre-existing harms that AI is exacerbating: gender-based violence, surveillance, mis- and disinformation, exclusion and voter suppression, right? How do we center them? Because in the experience of social media, that was an afterthought, and the infrastructure was built before. And I think that would be my key point: to have AI built in a way that is going to enable equity requires this pipeline responsibility across the ecosystem. And so in my world there's a lot of emphasis on platform accountability, and I think it's important to look at the outward-facing part of platforms, like Instagram, Threads, X, YouTube, whatever, but in fact, to look at this, we have to look at the whole AI model ecosystem if we're going to have things like robust transparency and robust ways to detect that are accessible to all. So I would say it's the centering of the communities who know the harms, because they've experienced them and because they have experience of how to counter them as well, and then, after that, how do we build this very robust pipeline responsibility? Otherwise we're just going to be tinkering on the margins. And there's urgency to act this year, but I would encourage not haste, because there are a lot of legislative proposals, a lot of attempts to do stuff in the elections context now, but we're laying technical foundations that are going to have implications in far worse years, or more complex years, or more advanced years around AI, in 2025 and 2026, let alone the second half of 2024.

And if I could just add that even when companies have equity teams, teams like civic technology, you know, elections teams who really get the context and are doing good work, civil society has trouble trusting that that is going to lead to systemic change across the company, and that other
teams, you know, other priorities, won't overwhelm it. And so even when they do get to engage, they're skeptical, and they'll see any other signal from the company that contradicts those conversations as undermining, as sort of going back on the promise that that conversation held. And so that's a real challenge, I think, for this room: how do you actually engage when you're not in control of the organizations that you sit within, and at any moment something could come up that just sweeps aside good work that's been going on, even if that work is going to continue? It's tricky. And figuring out how you localize but also scale is another thing I'm sure everyone's dealing with.

Speaking from experience there, are you, Miranda? So as we begin to wrap up this side, I want to go down the row. A lot of the folks here are fighting the good fight from within major companies. I'm curious what you each think is the positive vision. If we were looking back five years from now saying, "Wow, we accomplished this," what should "this" have been, in this moment where we're introducing these new tools and they're being picked up by good actors and bad actors alike? What is your hope for the future, especially for the people who are assembled in this room building it? Start with you, Eli, because that was just a lot.

It's a softball. I think my hope would be that there is a real effort to build very, very strong tech literacy into those local communities that are the centers of civic suppression, of voter disenfranchisement; that there's a real investment in teaching not just a tech literacy but a tech expertise to the people coming out of those communities. Because the closer we bring those together, the better we're all going to be. The more you remove the middlemen in translation, the better. So my dream would be that in a couple of years you've got people who are kids now, who are graduating from the University of Austin or
the University of Georgia, or wherever, who are deeply learned in all of the things and the tools that we are talking about, and who are also deeply rooted in their communities, who can be those bridges. Because there is an extent to which the tech aspect of this, the back end of the AI stuff, and I so appreciate everyone's enthusiasm in thinking that we'll get to an 85% place, means literally nothing to most people. It's a zero-or-100 question for the vast majority of people who are looking at these materials, and the zero or 100 is, to bring back the word, almost entirely based on vibes. And there is nothing more vibe-overpowering than a human being whom you know. So that would be my hope: to just bring them as close together as possible, so you have deeply rooted, trusted expertise in those communities.

Sam?

I spend a lot of time worrying about the negative effects, and I think we should be very worried about the negative impacts here if we want a vibrant usage and integration of, particularly, generative AI in a number of years' time. You know, we've heard from communities we work with that they see it as a path to access to knowledge, to creativity, to more diverse storytelling, the ability to communicate. Those are all positive visions, but they all depend on the fundamental foundations that are being laid right now around how we design these technologies, where we place the emphasis, and who is included there. And so it's just a critical moment right now to place that there, if we want to see that vision of a vibrant, more enabled communicative economy. But there are no guarantees towards that vision that we hear people describing.

Miranda?

I think, unlike previous imaginations of technology, there's broader awareness that the newest technology is going to have quite a disruptive effect, and people are thinking about what those effects are and preparing for them, and so that's positive. The challenge is it's moving
faster than ever, and so do we have time to actually take action on that knowledge in a way that's meaningful? And not just sort of saying, "There are going to be issues in every language; let's use automated machine translation to translate our mitigations from English to everything else, and that will have to be enough." And so I think the task is figuring out how to take the awareness of the issues that I think even the leadership of these organizations has, they know what could go wrong, and get them to turn that into buying some time for figuring out some of these issues. And if everyone can work together a little bit, maybe the market pressure to get things out the door extremely fast will go down, and everyone will have time to do that. But that's going to be the biggest challenge.

Please thank our panelists.

[Applause]
The Tech Accountability Coalition, part of Aspen Digital, hosted a virtual session on Wednesday, May 1, about the potential impacts of AI on our democracy.
Moderator Josh Lawson, Director of AI & Democracy at Aspen Digital, led experts at the intersection of tech and government through a discussion of the challenges and opportunities AI presents for maintaining the integrity and fairness of elections in the digital age. We were joined by panelists Miranda Bogen, the AI Governance Lab Director at the Center for Democracy & Technology; Sam Gregory, the Executive Director at WITNESS; and Eli Williams-Szenes, a Political Strategist at Public Wise.
Speakers
Miranda Bogen (she/her) Director of AI Governance Lab, Center for Democracy & Technology
Read about Miranda
Miranda Bogen is the founding Director of CDT’s AI Governance Lab, where she works to develop and promote adoption of robust, technically-informed solutions for the effective regulation and governance of AI systems.
An AI policy expert and responsible AI practitioner, Miranda has led advocacy and applied work around AI accountability across both industry and civil society. She most recently guided strategy and implementation of responsible AI practices at Meta, including driving large-scale efforts to measure and mitigate bias in AI-powered products and building out company-wide governance practices. Miranda previously worked as a senior policy analyst at Upturn, where she conducted foundational research at the intersection of machine learning and civil rights, and served as co-chair of the Fairness, Transparency, and Accountability Working Group at the Partnership on AI. Her writing, analysis, and work have been featured in media including the Harvard Business Review, NPR, The Atlantic, Wired, Last Week Tonight, and more.
Miranda holds a master’s degree from The Fletcher School at Tufts University with a focus on international technology policy, and graduated summa cum laude and Phi Beta Kappa from UCLA with degrees in Political Science and Middle Eastern & North African Studies.
Eli Williams-Szenes (he/him) Political Strategist, Public Wise
Read about Eli
Eli is a queer political strategist with over 10 years of experience in state legislative leadership, campaign management and political landscape analysis. For the past several years, he has focused on running statewide and national issue advocacy campaigns on behalf of reproductive freedom and voting rights organizations. Eli has a particular interest in the ways using a racial justice lens impacts how we plan campaigns, from research to messaging and voter contact, and is always looking for new research on activating infrequent voters and expanding the electorate to better reflect the nation. He lives in Takoma Park, MD with his partner and children.
Sam Gregory (he/him) Executive Director, WITNESS
Read about Sam
Sam Gregory is an internationally recognized human rights advocate and technologist, and an expert on innovations in preserving trust, authenticity and evidence in an era of increasingly complex audiovisual communication and deception. As Executive Director of WITNESS, he leads their strategic plan to “Fortify the Truth,” and champions their global team who support millions of people using video and technology for human rights and civic journalism. He has testified in both the US House and Senate on AI and synthetic media and is a TED speaker on how to prepare better for the threat of deepfakes and deceptive AI.
In 2018, he initiated WITNESS’s Prepare, Don’t Panic initiative (gen-ai.witness.org) around deepfakes and multimodal generative AI – the first globally focused effort to ground these technologies in the realities of frontline journalists and human rights defenders and which has directly influenced platform policies, the design of emerging technologies for trust, and public discussion of who and what to prioritize.
Sam served on the International Criminal Court’s Technology Advisory Board, co-chaired the Partnership on AI’s Expert Group on AI and the Media and led the Threats and Harms Taskforce of a leading coalition to develop standards for media provenance (C2PA). Recently he published ‘Fortify the Truth: How to Defend Human Rights in an Age of Deepfakes and Generative AI?’ in the Journal of Human Rights Practice. He holds an MPP from the Harvard Kennedy School and a PhD from the University of Westminster.