[Music]

Okay, hello and welcome. I'm Keren Yarhi-Milo. I am the dean of the School of International and Public Affairs and the Adlai E. Stevenson Professor of International Relations. Well, I cannot think of a timelier or more important topic to discuss than the role of artificial intelligence in democratic elections around the globe, and we are thrilled to be partnering with Aspen Digital on this effort. Over the course of the current year, about two billion people across 80 countries are expected to go to the polls to cast ballots. Carrying out free and fair elections is difficult even under normal circumstances, but in the current information ecosystem it is even more challenging. As we know, the health of a democracy is linked to the integrity of information, and those who wish to spread lies and myths and disinformation have never had the tools to do so at the speed and scale artificial intelligence allows. No wonder that democracies are facing stiff headwinds as autocrats from Moscow to Myanmar challenge the geopolitical order and use AI to tighten their grip on power. And I vividly recall what Maria Ressa, one of IGP's Carnegie Distinguished Fellows, told last year's graduating class of SIPA students. She described the lack of government oversight of these technologies as a doomsday clock for democracy. And we have already seen disturbing evidence of the dangers posed by AI to the integrity of elections. In Slovakia, for example, there was deep-faked audio of a candidate who appeared to be conspiring to rig the election. In Indonesia, deepfake technology was used to portray Suharto, the country's long-standing dictator who has been dead for 16 years, telling people to vote today. And this is only the tip of the AI iceberg. These misuses of technology are all happening as governments and tech companies are scaling back their resources to police fake content and fight disinformation. Today's panels will therefore examine topics including risks to 2024 elections worldwide, lessons from recent global elections, and
the roles and responsibilities of the tech companies. We'll also talk about implications for this year's U.S. elections in particular, and I know many of you are interested in this. We are fortunate, really fortunate, to have so many of the world's leading experts and practitioners on this subject with us here today. I am hopeful that today's discussions give us some reasons to be optimistic for the future of AI and democracy, although, knowing some of the panelists, I'm not sure about the optimism part. We're already seeing some signs of progress, and we should mention them. Europe has passed promising new regulations to rein in tech companies and increase transparency and accountability, and here in the United States, lawmakers in 32 states have introduced 52 bills to regulate deepfakes in elections. We have a lot of work to do here in the United States and elsewhere, and I hope this forum, this conversation, will move us forward in thinking about the problems, the challenges, and the potential solutions.

So before I turn it over to our co-host, I just want to thank everyone again for coming and participating in today's panels. I want to thank the staff who organized this. Look, this event is the embodiment of IGP as an organization. We are discussing here an important topic that cuts across so many of our global challenges. Here at SIPA we're focusing on technology and innovation, and this is a big part of what we will be looking at and doing. We're looking at democratic resilience, and this is exactly at the intersection of these two, and I'm very excited about the projects we have coming up in the pipeline, looking at those with our faculty, with our fellows, and definitely more to come. Again, we're doing this by bringing together the best of the private and public sectors to generate new ideas and solutions that are based on data and evidence. And that is why the Institute of Global Politics was created. This is exactly the vision that Secretary Clinton and I had when we started this. How do we
take this on? What can we do to help? And this convening could not be more relevant and important today. And with that, it is my great honor to introduce our co-host of today's event, Vivian Schiller. One of the highlights of this work that we're doing here at IGP is how we are joining forces with incredible partners on meaningful issues, and when Vivian and I first met, we knew immediately that we wanted to collaborate, we wanted to work together. We were really passionate about the same topics, and I'm so glad that we were able to do this and get to the point that we are doing this event today. Vivian joined the Aspen Institute, where she's the vice president and leads Aspen Digital, just four years ago, after a long career at the intersection of news media and technology, with stints that included president and CEO of NPR, head of NYTimes.com, chief digital officer for NBC News, and global head of news at Twitter, among other things. Please join me in welcoming Vivian Schiller to the stage.

[Applause]

Thank you so much, Keren. I remember all too well our meeting in your office. It wasn't even really that long ago; it was in the fall, and it was just this instant spark, and we're like, yes, we have to do this. And wow, here we are already. Anyway, on behalf of the Aspen Institute and Aspen Digital, we are just so honored to partner with SIPA on this important conversation. It's gatherings like this today that remind us that a key measure of any democracy is its capacity to absorb and adapt to change. In the short life of this country (which, I was reminded, is actually younger than the life of this university), people in good faith have come together to meet moments of tremendous disruption. We are capable of it. We are capable of marshalling our ideas to mitigate the bad, and particularly to harness the good, into reforms that reflect and preserve democratic values. It's important for us to remember we are capable of doing this, particularly as we today draw on the example of global elections that
have happened to date, in an effort to learn about the AI-driven challenges ahead for this country's elections here in the U.S. In this way we are really engaged in a quintessential American tradition: forming a democratic response to emerging technologies. So AI advances, it's really important to say at the get-go, have tremendous, tremendous potential for good. And we also know that those who would seek to disrupt free and fair elections, because they have tried before, are eager to enlist AI tools in their ongoing effort to undermine trust in democratic institutions, to pollute our information environment, and to distract or discourage voters. This is true ahead of this November's elections, and it will undoubtedly remain true for the foreseeable future. So language tools, for example, can be co-opted to mislead minority communities. Audio tools can spoof the voice of a candidate or an election official to demobilize or disincentivize voters. Fake images or videos can be used to deceive the public at particularly critical moments. And to me, what's even more alarming is that even if voters are not fooled, we now have to prepare to navigate a world where it's easier to completely dismiss evidence-based reality. We could be living in a world where everyday people will simply stop believing anything they see, anything they hear, and to me, honestly, that's more terrifying than anything else I can think of. It is also part of the autocrat's playbook. So this is what we're up against. In the words of our next speaker, it will take a village, in this case, to build social resilience that resists the pull to become a suspicious and stubborn people. We will have to insist that the truth is knowable and worth knowing, and we need to learn to trust again: institutions, information, and each other. The democracy practitioners and experts here today are helping to define what democratic resilience will mean in the face of these challenges. And by the way, I should mention, in addition to our speakers, there are so many folks I know in the room here who are
working hard at these efforts, even if they're not on stage, and we thank you for being here. I am confident that we have what it takes, the all of us, the "we" of this country, to meet this moment and to secure a democratic future in the age of AI, as surely we must.

I am now pleased to introduce our first panel. Our first panel is called Setting the Stage: Risks to the 2024 Global Elections. And with us, I am so honored to introduce Secretary Hillary Rodham Clinton. She is Professor of International and Public Affairs at Columbia University, the 67th Secretary of State, former senator from New York, and IGP faculty adviser and board chair. Věra Jourová, the Vice President for Values and Transparency of the European Commission. Hello, Madam Vice President. Maria Ressa, my friend of many decades, Nobel Peace Prize-winning journalist, co-founder, CEO, and president of Rappler, and IGP Carnegie Distinguished Fellow. And moderating is the spectacular journalist Gillian Tett, columnist and part of the editorial board of the Financial Times. I welcome you to the stage.

[Applause]

Well, thank you very much indeed, Vivian, and the dean, for that wonderful introduction, which sets out the issues. I should say that I'm personally absolutely thrilled and honored to be moderating this panel, because first of all, I am a journalist who cares deeply about the truth at a time when it's under threat. I'm an American citizen who's deeply worried about the election that's coming up. And also, when I'm not a journalist, I'm attached to Cambridge University, King's College, as an anthropologist trained in digital ethnography, and I passionately believe that the only way to handle AI responsibly is to add a second AI, which is anthropology intelligence, to understand the social impact. And this is what this afternoon is going to be all about.

I'd like to start perhaps with you, Secretary Clinton, and ask you: I think we first talked many years ago about the terrible threat of misinformation. I happened to have a background in Soviet studies and
had seen some of the absolutely nutty misinformation that was going around in the Russian media many years ago, well before 2016, that portrayed you as some kind of she-devil out of Indiana Jones stalking the world, which was, you know, incredibly corrosive. And since then it's got worse and worse and worse. How alarmed are you about the upcoming elections, and is there anything that we can actually do to stop the tsunami of misinformation?

Well, Gillian, thanks so much for being here with us and for moderating, and I know you cover these issues and have a lot of experience in trying to make sense of them. I think that anybody who's not worried is not paying attention. There's more than enough reason to be worried about what we've already seen, but certainly, I think, as we're here today doing this panel and having these other experts and practitioners speak to us, there are literally people planning how to interrupt, interfere with, and distort elections, not just in the United States but around the world. And so if I could just focus on the United States for a minute. What Gillian's referring to, I think, was really motivated by my time as Secretary of State, doing the job that I was asked to do by President Obama, representing our values, our interests, our security around the world. In the fall of 2011, after Putin announced that he would be coming back as president, they had elections for the Duma, and they were so blatantly fraudulent. I mean, videos of people throwing away ballots and people stuffing ballot boxes. This was not made up; this was, you know, very clear, undeniable distortion and interference. So as Secretary of State, I said that the Russian people deserve better. They deserved free and fair elections where their votes would be cast and counted appropriately. And, having literally nothing to do with me, when the news came out about what had happened in those elections, tens of thousands of Russians, particularly in Moscow, St. Petersburg, and a few
other places, went out into the streets to protest, to protest for their right to actually choose their leaders. And it totally freaked out Putin, and he actually blamed me publicly for the reaction in Russia. That was the beginning of his efforts to undermine and take me down, in, you know, very real time, starting before the 2016 election, but certainly picking up a lot of steam and impact during that election. And it was such an unprecedented and really quite surprising phenomenon. I don't think any of us understood it. I did not understand it. I can tell you my campaign did not understand it. The so-called dark web was filled with these kinds of memes and stories and videos of all sorts, portraying me in all kinds of less than flattering ways, and we knew that something was going on, but we didn't understand the full extent of the very clever way in which it was insinuated into social media. If it had stayed on the dark web, you know, maybe a couple hundred thousand people would pay attention, but this jumped into how we communicate. And the only thing I can say about it is, well, I can say two things about it. One, it worked. There are people today who think I have done all these terrible things because they saw it on the internet, and they saw it on the internet in their Facebook feed, or some Twitter this or Snapchat that that they were following, the breadcrumbs. And what they did to me was primitive. What we're talking about now is the leap in technology that we're dealing with. You know, they had all kinds of videos of people looking like me but who weren't me, and they had to keep whoever that woman was with her back to the camera enough so that they couldn't actually be found out. Now they can just go ahead; they can take me, and in fact they're experimenting. I've had people who are students and experts in this tell me that, once again, because they've got such a library of stuff
about me, they're using it to practice on and seeing how much more sophisticated they can get. So I am worried, because having defamatory videos about you is no fun, I can tell you that. But having them in a way where you really can't make the distinction that Vivian was talking about, where you have no idea whether it's true or not: that is of a totally different level of threat. So I think we're setting the stage in this panel, and we've got two people who really understand this deeply with our other panelists.

Well, thank you very much indeed, and I'm glad you went back to that extraordinary story post-2011, because I think most people don't know about it. Like most people, when I first saw those images, very early on, because I do speak Russian, I laughed. I thought it was so ridiculous, and boy, was I wrong to laugh. It's extraordinary how this has spread and how much more blatant it is today. And I'd like to turn to you, Commissioner, and ask you: when you look at the problem, you've just had an election in Slovakia where, as we heard earlier, AI was used to manipulate the vote, seemingly successfully. Europe has a lot of elections coming down the track this year. The European Commission has taken a much more aggressive stance than Washington in trying to stand up to Big Tech and control it (maybe not aggressively enough; I can see your face, you can tell us if you agree or not), but at least you've tried to challenge Big Tech on some aspects of its responsibilities. How concerned are you that this year's elections in Europe will be undermined by AI?

Yeah, thank you very much, and thank you for inviting me, because this is really an honor to be in such company. Well, how worried am I? I don't have any right to be worried; I have to act, because I am the European regulator. And I don't know whether we were aggressive, but what we did was maybe not aggressive but necessary, because we already have good data, good analysis, showing that most of the elections in the EU member states were affected by at
least Russian propaganda and Russian hidden manipulation these days. Yesterday the Czech and Polish secret services disclosed the data and the facts about Russian propaganda and disinformation affecting several elections through domestic parties. Putin cannot do it directly, from his mouth to the ears of... (It's me. No, that's not Putin. You never know.) He cannot do it directly, from his mouth to the ears of European people, so he simply needs allies in our member states. We had to take measures, and we are taking measures before the European elections, which you asked about, for two reasons. We do not want Mr. Putin to start winning elections in the EU, because the purpose of his propaganda is clear. It's a message: stop the support of Ukraine. And he knows that we are all democracies, so he has to do it through our people. So the purpose is absolutely clear. So we take measures: an agreement with the platforms to remove deepfakes in the campaign, so the AI should be very much limited, or at least labeled; we have measures involving civil society for enhanced fact-checking; we have calls on the independent and public service media to take care of the facts, because what we speak about is the protection of evidence-based truth. And Madam Clinton, it's interesting how we politicians try to avoid the word "truth," because we will be immediately accused of having our subjective truth. So when I speak about the truth, I speak about evidence-based truth. We speak about the facts, and we really believe that our set of measures should have a real impact on the campaigns: that they will be fair, that they will be transparent, that they will be free of hidden manipulation by AI. And maybe a last comment on why I am dealing with this. I am the Commissioner for Values. Can you imagine the shock at the beginning, when I got this portfolio, values, from Ursula von der Leyen? What does she mean by that? So it's the protection
of the rule of law, democracy, and fundamental rights, and I would add also the protection of evidence-based truth, because the destiny of a society which stops valuing the truth is to live in lies, and this is what we don't want in Europe.

Absolutely. Well, that's particularly pertinent and potent for anyone emerging from East European countries. I'm just curious, before I turn to Maria: have you been the victim of misinformation yourself?

Oh my God. (I can see the attacks every time we're on a feed. I see the attacks on both these women, by the way. Our attackers just combined.) So yes, you've been under incredible attack? Yes, I am. I have been under attack for many years. That's why I also canceled my Facebook account, and at the end of the year I might get out of politics totally, so believe me, I will cancel everything. I am now on Twitter and on Instagram. Yes, I am under permanent attack. I was one of the two women most attacked in the EU, and the other one was Angela Merkel. As for AI, I have no complaint, because there was just one case of me having Lara Croft's body. I liked it. But maybe we should not make jokes about that, because we see a lot of really harmful things against girls and women. And that's why we also recently adopted the directive on violence against women, which contains a chapter on digital violence. It's, I think, the first time ever in democratic state legislation that we are defining that, and also in the AI Act we define these kinds of practices as something which has to be punished, because we also have to see crime and punishment in practice.

Absolutely. Well, as someone who's worked in the Muslim world a lot, one of the things that horrifies me is how female activists are being silenced by the use of AI to create pornographic images that are so shameful that it makes it extremely hard for female activists to continue in that culture. It's absolutely horrific. It really is. Maria, have you been attacked? Oh my gosh.
Well, first of all, it's nothing compared to what both these women have had. And, you know, I think for our American audience: Věra Jourová is not only handling her values portfolio, she also has the portfolio of Margrethe Vestager, which means she is the most powerful woman regulating tech right now. And that's part of the reason... seriously, the last time I was on this stage with Hillary, I was attacked by her attackers, and every time I'm on stage with Věra, we also get attacked. In spite of that, we love each other. We do. Bottom line is, we're all going to get attacked the minute we get off stage, so there we go. And I think the hard part is, you don't know what it's like until you are attacked, and that's part of the reason I would like some of the men from Silicon Valley to actually trade places with us for a day or so. So have I been attacked? Yes. It is a prelude, bottom up: you say a lie a million times, it becomes a fact. For me it was 90 messages per hour, and then a year later, the same thing that was seeded online became cases that were filed by my government against me. Very slowly, the 21 investigations became 11 criminal charges, which became only two left after seven years. So we fought it. But I think the real impact of this (and you've talked about it) is that Russia is really the pioneer. And the EU's elections, the major democracies around the world, are having elections this year. Where the EU goes, where America goes... it's really scary for the rest of us in the Global South, because you're not even acknowledged; you're being manipulated. If you're a woman, gendered disinformation is using free speech, i.e. information warfare, to pound you to silence: if you are in a position of power, if you're a journalist, if you're a human rights activist, if you're a student who stands up. You know, this whole thing of "woke," like, we kind of jump into it, but there are information operations that seed a lot of this. So this is the world of... let's not even call it that; it's a world of lies. It's a world
of personalization. And I see so many faces here, because you're going to hear from David Agranovich, I see Katie Harbath, who was also in the Philippines, so please ask the questions. But more than anything, you can be attacked, and it's not just about being attacked; it's the fact that we have lost agency. We live in different realities, right? Personalization, when you're talking about buying sneakers, is, you know, okay, fine. You're going to get recommended sneakers because you looked for sneakers. That was a long time ago. Now personalization means that I will give you your reality. I will give you your reality, but even though we're in the same shared space, we have a hundred-plus realities. That's called an insane asylum. That is the world we live in today.

Absolutely. Well, it's no accident that there are four women sitting on this panel right now, because it really is a strong gender issue, and thank you, Maria, for pointing out that, notwithstanding the inward-looking nature of a lot of American and European politics today, it's not just a Western issue. In many ways it's actually harder to tackle in the emerging world right now, which is very alarming. But we're going to hear a lot later on about what can be done to counter this. Would you like to share any thoughts, Maria? Do you have thoughts about what you'd like to see to fight back?

I mean, for Americans: get rid of Section 230, because the biggest problem we have is that there is impunity, right? Stop the impunity. Tech companies will say they will self-regulate. Self-regulation comes from news organizations: when we were in charge of gatekeeping the public sphere, we were not only just self-regulating; there were legal boundaries. If we lie, you file a suit. Right now there's absolute impunity, and America hasn't passed anything. I joke that the EU won the race of the turtles in passing legislation that will help us. It's too slow for the lightning-fast pace of tech, and the people who pay the price are us. Us, this young generation. I was just with Vivek Murthy, and you know,
the Surgeon General of the United States didn't file his report until May last year. Hillary was probably ground zero for all of the experimentation. What kind of different world would we live in if she had become president? I mean, she won't say that, but I will, right? I think many of you in the room would think that.

Secretary Clinton, would you agree that the first step is to abolish Section 230? It certainly is among the first steps. You know, I think it's very difficult to be as upset with the tech companies as we are, and I think rightly so, since they were granted this impunity, and they were granted the impunity for a very good reason back in the late '90s, which is: we didn't know what was going to happen. We had no idea. Were they a platform, kind of like a utility, which sent content through it, and therefore they didn't have the kind of liability, and you would go underneath to see where the content came from? Were they content creators? Did they have a duty either to warn or prevent? I mean, nobody knew anything, because nobody had a real sense of what was happening. Well, now we do. And shame on us that we are still sitting around talking about it. Section 230 has to go. We need a different system under which tech companies, and I'm mostly talking obviously about the social media platforms, operate. And I for one think they will continue to make an enormous amount of money if they change their algorithms to prevent the kind of harm that is caused by sending people to the lowest common denominator every time they log on. You've got to stop this reward for this kind of negative, virulent content, which affects us across the board. But I will say it is particularly focused on women. The empowerment of misogyny online has really caused so much fear and led to some violence against women who are willing to take a stand, no matter who they are. Are they in entertainment? Are they academics? Are they in politics or journalism? Wherever they are. And the kind of ganging-up effect that comes from online: it could only
be, you know, a very small handful of people in St. Petersburg or Moldova or wherever they are right now who are lighting the fire, but because of the algorithms, everybody gets burned. And we have got to figure out how to remove the impunity, come up with the right form of liability, and do what we can to try to change the algorithms. And the final thing I would say is, we also need to pass some laws that understand that this is the new assault on free speech. You know, in our country people yell "free speech"; they have no idea what they're talking about half the time, and they yell it to stop an argument, to stop a debate, to prevent legislation from passing. We need a much clearer idea of what it is we are asking governments to do, businesses to do, in the name of "do no harm." Free speech has always had limitations, has always been subject to legislative action and judicial oversight, and we need to get back into that arena.

Right. Commissioner, I can see you frankly scribbling notes. Tell us: you are officially the leading regulatory turtle, so what would you do? Well, I remember last year, when I was in [inaudible], I said similar things as you, Madam Clinton, about how maybe the United States will also have to move towards less impunity, or no impunity, online. You cannot imagine... (I can imagine.) You can... what I received from Republicans. I was afraid that I would be somehow wanted here as somebody who is committing a horrible crime. But maybe for the EU it is easier to legislate the digital space, because look at the situation. While the United States has to make a big, big jump, we were kind of ready for that. Count with me: illegal content, hate speech, child pornography, terrorism, violent extremism, racism, xenophobia, anti-Semitism. We have had all these things in our criminal laws for decades. This is nothing new. So when we started to think about how to legislate the digital space, we in fact said: what is illegal offline has to be handled as illegal online. So we
didn't create any kind of new crime. It was just pushing the existing law into the digital space. So that's why, for us, this era of adaptation was maybe easier than in the U.S., where you really have to make a bigger jump. And if you let me say two more things: impunity is wrong. Crime without punishment in the digital sphere is another crime, I have to say. And we have to also adapt as a society. I would like to still be alive when I see a strong rejection from society, that this is not acceptable, we don't like it, and if in that system we are confronted with hate speech and dirty content, we will simply move somewhere else. So also for the digital companies it will be a strong signal that they should not let their business be damaged, because they need users. This societal reaction is still missing; I think it will take some more years. A last comment on violence against women: we see women disappearing from public space, and here we speak mainly about politicians and journalists. We had a conference (Maria was in Brussels last month), and one shocking thing came out. When the political parties want to win elections (here I speak about the politicians), they are attracting women to come, because they are, well, good products to sell. Sorry to speak about women that way in campaigns. But then, when the women take the temptation and become politicians, the same political parties are not honest enough and courageous enough to defend them. So I see cases of women who are horribly attacked with horrible words, like the Slovakian president. Nobody is defending her. So should we remain alone with that? I think that there should also be some healthier reaction from the political parties and from the newsrooms as well.

Well, thank you. Well, sadly, very sadly, we are out of time. You set the scene fantastically. I take away three key points. One is that if women were running the world, I think there would be quite a different tone and sense of
urgency tothisdebate secondly these issues ofmisinformation are not entirely new Imean they go back a decade but they havedramatically accelerated in recent yearsand AI is threatening to make it worseand we have no time to lose because ofthe impending elections and thirdly wecannot duck the question of what ishappening with the tech companies andtheir responsibility if we want to moveforward to some kind of if not solutionthan containment we’ll be hearing fromtech companies later on today we’ll behearing from another other a number ofother voices about this vital debate butin the meantime can you all please showyour thank yous to them for a greatcomment that’s not itthank[Music][Music]you[Music]please welcome back to the stagesecretary Hillary rodmClinton Hi how areyouGL well if you’re notdepresseduh we’ll get youthere uh I I could not be happier uh tohave these uh extraordinary panelistsfollow up on the uh setting of the stagebecause now we want to get a little bitdeeper and understand the implicationsfor the upcoming uh us elections and wehave four amazing uh panelists uhJoselyn Benson is the Secretary of Stateuh of Michigan and she’s been in the eyeof the storm uh since well before uh the2020 election uh by far but you knowsince then certainly one of the realleaders to try tounderstand um what was happening uhMichael cherof the former Secretary ofHomeland Security uh co-founder andexecutive chairman of cherof group andyou know Michael really has just a depthof experience about dealing withoriginally it was online radicalizationand extremism and now of course uh basedon his knowledge of that uh set ofthreats he understands uh you know we’vegot to under you know we’ve got to facewhat’s going to happen uh in theelections uh Dara Linden mom is thecommissioner of the Federal ElectionCommission of the United States and assuch uh you know she is part of thegroup that is trying uh to you know makesense of where money is being spent andwhat’s being done with it and the 
impact that it is having. And Anna Makanju is the vice president of global affairs at OpenAI, and we really are thrilled that she's here with us, because clearly OpenAI, along with the other companies, is forging new ground, and a lot of it is very exciting, and frankly, Anna, a lot of it's very concerning. So part of what we want to do is help sort that out, particularly as it possibly affects elections.

So, Michael, let me start with you, because, as I said, you really were on the front lines when you were at the Department of Homeland Security, leading efforts to understand and prevent the use of the internet, at that point, to provide outlets for extremism and the radicalization of people. And now I think there's legitimate concern about hostile foreign state actors. Not just Russia; there are others who are getting into the game. Why not? It looked like it worked, so, you know, join the crowd. But we're now worried that they will use artificial intelligence to interfere in our elections this year. Can you explain, for not just our audience here but the people who are watching the livestream, both the downsides, as to how AI can be used by our adversaries, but also what we can do to protect ourselves?

Thank you, and thank you again for leading this, Secretary. So let me say, I think in this day and age we have to regard the internet and information as a domain of conflict. Actually, if you go back historically, even a hundred years, it's always been true that our adversaries have attempted to use propaganda and false information to manipulate us, but the tools they had were relatively primitive. What artificial intelligence has done is equip people with tools that can be much more effective with respect to the information domain. We've talked a little bit about deepfakes and the ability to have simulated video and audio that looks real, and unlike Photoshop, or some of the things some of us remember from years ago, this has gotten to
the point that it's very, very difficult, if not impossible, for an ordinary human being to tell the difference. But I would actually argue that artificial intelligence has capabilities and risks that go beyond that. What artificial intelligence allows an information warrior to do is deliver very targeted misinformation and, at the same time (and it's not a contradiction), to do that at scale, meaning to do it to hundreds of thousands, maybe even millions, of people. What do I mean by that? In the old days, again 10 or 20 years ago, if you sent out a message that was incendiary, you affected, and maybe induced belief in, some people, but a lot of other people would look at it and go, "Oh, this is terrible," and it would repel them. So that was an inhibiting factor in terms of how extreme your public statements were. But now you can send a statement tailored to each individual viewer or listener that appeals only to them, and nobody else is going to see it. Moreover, you may send it under the identity of someone who is known and trusted by the recipient, even though that is also false. So you have the ability to send a curated message that will not influence others in a negative way, and the reason I say it's at scale is that you can do it millions of times, because that's what artificial intelligence does. I think that has created a much more effective weapon for information warfare.

Now, in the context of the election in particular, what are we worried about? Obviously one experience we had, and we saw this in 2016 with the Russians assisting the Trump campaign, is that there can be an effort to skew the votes toward a particular candidate or against a candidate. We saw that with Macron in France in 2017; we've seen it in the EU and in other parts of the world. But I would actually argue that this year we're facing something that in my view is even more dangerous, and that is an effort to discredit the entire system of elections and democracy. You know, we had a defeated candidate who
I won't mention their name, who has talked about a rigged election. Now imagine that the people who are an audience for that start to see videos or audio that look like persuasive examples of rigged elections. It's like pouring gasoline on a fire, and we could have another January 6th. And I understand that the reason our adversaries like this is that, more than anything else, they want to undermine our democracy, and in a world in which we can't trust anything and we can't believe in truth, we can't have a democracy.

That's going to lead to a third consequence, which will be very dangerous. We're talking about how you distinguish, and teach people to distinguish, deepfakes from real things, the idea being that we don't want people misled by the deepfakes. But I worry about the reverse: in a world in which people have been told about deepfakes, do they say everything's a deepfake, and therefore even real evidence of bad behavior has to be dismissed? That really gives a license to autocrats and corrupt government leaders to do whatever they want.

So how do we help counteract that? Well, there are some technological tools. For example, there is now an effort to do watermarking of video and audio, where genuine video or audio, when it's created, carries a cryptographic mark such that anybody who looks at it can validate that it is real and not fake. More than that, we've got to teach people about critical thinking and evaluation so they can cross-check: when you get a story that appears to stand alone, look to see what the other stories are; is anybody else picking it up? And we need to establish trusted voices that are deliberately very careful and very scientific about the way they validate and test things. Finally, I think we've got to teach, even in the schools, and this is going to start with kids, critical thinking and values: what it is that we care about, and why truth matters, why honor matters, why ethics matters, and then to have them bring
that into the way they read and look at things that occur online. This is not going to be an easy task, but I do think we need to engage everybody in this process, not just professionals, and make it part of the mandate for civil society over the next year or two.

Thank you so much, Michael; that was incredibly helpful in laying the groundwork for what we need to be thinking about. So, Dara, what is the Federal Election Commission doing to try to set up some of those guardrails on AI-fueled disinformation ahead of the 2024 elections?

Well, thank you for having me. First of all, it's an honor to be a part of this really important discussion. To your question, the short answer is that the FEC is fairly limited in what it can do in this space, but there is hope on the horizon, and there are different ways that things are developing. Just to lay the baseline: despite the name, the Federal Election Commission really only regulates the campaign finance laws in federal elections, so the money in, money out, and transparency there. But last year we received a petition for rulemaking asking us to essentially clarify that our fraudulent-misrepresentation regulations include artificial intelligence, and really deepfakes. We are in the petition process right now to determine if we should amend our regulations, if we can amend our regulations, and whether there is a role for the FEC, in these campaign finance regulations, in this space. Our statutory language is pretty clear and very narrow, so even if we can regulate here, it's really only candidate-on-candidate action: if one candidate does something to another candidate, that is all that we could possibly cover, because of our statutes, unless Congress expands that.

But all is not lost, and there are some pretty great things that have come out of this. One is what happened during our petition process: we've received thousands of comments from the public and from many other institutional actors, including a lot
of the smaller tech companies and organizations that don't often have a seat at the table, but here it was really an open forum for them to bring their ideas to light. These comments were insightful, they were creative, and it is my hope that Congress and states and others looking at this will read all of these comments as they try to come up with possible creative solutions here. In addition, Congress could expand our limited jurisdiction. If you had asked me three or four years ago whether there was any chance Congress would regulate in the campaign space and really come to a bipartisan agreement, I would have laughed, but it's pretty incredible to watch the widespread fear over what can happen here. We had an oversight hearing recently where members on both sides of the aisle were expressing real concern, and while I don't think anything's going to happen ahead of November, I see changes coming. There's bipartisan discussion (Senator Klobuchar is leading this, with Senator Warner), and they're thinking about ways that they can do something. These efforts are really focused on the deepfake space, not the misinformation and disinformation that's underneath it all; but this discussion of AI, and how AI is so at the forefront of everything we're discussing in this country, I think has brought more to light this misinformation and disinformation and the ways that information gets disseminated. So things could change; I'm hopeful.

Well, I really appreciate your talking about that, Dara, because a lot of people ask who oversees elections, who tries to make sure that our elections don't go off the rails and we don't have a lot of these problems. And as you just heard, it's not the Federal Election Commission; their mandate is narrow, and they try to make sure people who are contributing to elections have the right to do so and candidates are spending appropriately. So much of the work of regulating elections
is done at the states in our country, and we're so fortunate to have Jocelyn here, because, as I said in introducing her, she really has been at the forefront of trying to figure out how to protect our elections, to make sure they have integrity, and Michigan has recently moved to regulate artificial intelligence. I want you to tell us about that legislation and any other actions that you are taking on behalf of your state, and that other states are taking. But maybe just start, Jocelyn, with a quick introduction of what you've been facing. You were elected in what, 2018? And people remember pictures of armed men storming the capitol because they didn't like what the governor was doing about COVID, and Michigan was at the real center of all of the crazy theories that were put forward in 2020 about the election. Give us just a quick overview, and then tell us what your regulation intends to do and what else needs to be done.

Yeah, thank you, and thank you, Secretary Clinton, for inviting me to be part of this really important conversation. To me, we cannot protect the security of our elections if we don't take seriously the threat that artificial intelligence poses to our ability as election officials to simply ensure every vote is counted and every voice is heard, and that citizens have confidence in their democracy and in their voice and in their votes. That's our goal in Michigan and in several other states all around the country. We are coming off of being in the spotlight in 2020, rising to that occasion, but also seeing very clearly, and living very clearly, what it means when people with guns show up outside my home on a dark night in December, and I'm inside with my four-year-old son trying to keep us safe. That's real, and they showed up there, just as they showed up at the capitol on January 6th, because they'd been lied to and fed misinformation. And now we're facing an election cycle where those lies will be turbocharged through AI, and we have to
empower citizens to stand with us in not being fooled and in pushing back on that misinformation and those lies. Therein lies both our opportunity and the real challenge. The adversaries of democracy are focused on sowing seeds of doubt, creating confusion and chaos and fear in everything they do, and they now have this new emerging technology that is, day by day, getting stronger and, you could perhaps say, more effective at accomplishing those goals of creating chaos, confusion, and fear in our democracy and in our voters' minds. How do we respond to that at the state level, and throughout our country as citizens? By giving each other certainty and confidence that our democracy will stand, just as it prevailed in 2020 and every time before and since, but also by equipping every single one of us with clarity as to how to respond when we get this misinformation.

So in Michigan, first, we set up the guardrails, and several other states have done this too, and we do hope the federal government joins us, in banning the intentionally deceptive use of artificial intelligence to confuse people about candidates, their positions, or how to vote or where to vote or anything regarding elections. We've drawn a line in the sand: it's a crime to intentionally disseminate, through the use of AI, deceptive information about our elections. Secondly, we've required disclaimers and disclosure on any type of information generated by artificial intelligence that's focused on elections. For example, one of the things we're worried about, and because of AI it could be targeted to a citizen on their phone, is a text saying, "Here's the address of your polling place; on Election Day, don't go there, because there's been a shooting; stay tuned for more information." That's going to invoke fear; again, the goal is fear, right? It invokes fear in a citizen. With the disclaimer and disclosure requirement in place, it has to be disclosed that this has been
generated by artificial intelligence. It's still not sufficient, but it is a key piece of enabling us to push back. The other side of that is that we need to equip that citizen, when they receive that text, to be fully aware, as a critical consumer of information, of what to do, where to go, how to validate it, and where the trusted voices are. So in addition to passing these laws, we are setting up voter confidence councils, building out these trusted voices, so that faith leaders, business leaders, labor leaders, community leaders, sports leaders, education leaders, even mayors and local election officials, can be poised to be aware and to push back with trusted information. It's layers upon layers of both legal protections and partnerships to equip our citizens with the tools they need to be critical consumers of information. And then, in everything we do between now and every election, but certainly leading up to November, we are helping to communicate, in every room we're in, that it's on all of us to protect each other from the threat of AI with regard to our elections, and in many other spaces as well. While we as officials will be working to do that, we're also trying to communicate to citizens that this is a moment that's going to define our country for years to come, and we all have a responsibility in this moment for making sure we're not fooled, our neighbors aren't fooled, our colleagues and friends aren't fooled, and for equipping all of us with the tools we need to push back, speak the truth, value honor and integrity, and help define our country moving forward, rooted in those values.

Well, I am a huge fan of what you and your attorney general and your governor have been doing, and I think it would be great if you could get some help to model this; I'm hoping maybe some tech company or some foundation will talk to you afterwards, because we need to show this can work. I saw Michael nodding his head. If this is a fight against disinformation, we have to try to put up guard
rails, but we also have to flood the zone with the right information to counter the negativity that is out there. So I hope you can implement that and we can then all learn from it, because it's not a problem that goes away after this election.

So, Anna, you've been sitting here through the first panel, and now you've heard our other panelists, and you are truly at the center of this, because at OpenAI you all are moving faster than anybody can even imagine, sometimes, I think, probably even yourselves, about what it is you're creating and the impact that it will have. And this is obviously the ground-zero year; this is the year of the biggest elections around the world since the rise of AI technologies like ChatGPT. So can I ask you: do you agree with what you've heard from the panelists about the dangers? But then tell us what you're doing at OpenAI to try to help safeguard elections. Give us your assessment: are we overstating it, are we understating it, what can be done, and how can you help us do it?

So, what's been really interesting to me, listening to your first panel and to my co-panelists here, is that so many of the ideas and the concerns are things we are already integrating into the technology. If I could just say, the one piece of good news is that, unlike previous elections, none of us, in terms of the tech companies, election officials, even the public and the press, are coming into this unprepared. This is especially true for me, because I was actually working at the White House on the Russia portfolio in 2016, so this has been top of mind for me from day one in the job. But at OpenAI, a relatively young company, this is something that's been top of mind for us for years. In fact, GPT-2, which was several years ago, and quite embarrassing compared to what exists now, was state-of-the-art at the time; it could produce paragraphs
that were text like a human could write, and even then we thought, whoa, the possibility for this to be used to interfere with democracies and electoral processes is very significant. So we made a decision then not to open-source it, and it was quite controversial in the research community, but it was because we had this in mind. Ahead of 2016 we were not having panels like this, so I think in general we are much more prepared as a society, and we are working together: OpenAI is working with the National Association of Secretaries of State and with social media companies, because one key thing to remember is that there is a real distinction; we are not dealing with the same kind of issues at AI companies. What we do is generate AI content rather than distribute it, but we need to be working across that chain.

In terms of specifics: of course, as many have mentioned here, and as I hear in almost every interaction with policymakers, deepfakes are a very serious concern. For us, we have DALL·E, which is an image generator, and we simply do not allow it to generate images of real people, period, and in particular politicians. And now we are implementing something called C2PA, which is a digital signature, and the great thing about C2PA is that it's not just AI companies; this is The New York Times and Nikon and the BBC. So it's going to be an ecosystem where there's actually a tool, across the ecosystem, that's going to help journalists and social media companies identify whether a piece of content is generated by AI. Obviously this is not a complete solution, but this was not the case a year ago, so already we are much more advanced in our ability, as an entire ecosystem, to deal with these issues. We also have threat investigators; we recently took down a bunch of state actors who were using our tools. And so these
two pieces, cooperation across all of the players and all of the state-of-the-art interventions that we are building, mean, I think, that right now the kind of thing that you described, Secretary Chertoff, is not possible with OpenAI tools: you cannot connect them to a chatbot to spew information and target it at voters. But we're constantly evaluating what other kinds of threats this technology creates.

I would just wrap up with this: of course I do have optimism, otherwise I wouldn't be working at OpenAI. One of the things these tools have the potential to do is create access to education for new segments of society, and so there's a potential these tools will actually help create a citizenry that is more educated and more aware, which I think is a really key aspect of a healthy democracy. They can also be used by secretaries of state who are incredibly busy in back offices. It is a bit of a race between the positive applications of these technologies and the negative ones, and this is why it's so fantastic that, for example, the executive order by President Biden really works to strike that balance.

Well, we have only a few minutes left, but I just want to ask each of the panelists: what steps can governments, obviously national, state, and local in our country, and private-sector companies, particularly the tech companies, the AI companies, the platforms, and nonprofits, anyone you think of, take to, number one, ensure the integrity of this upcoming election, and then, for the longer term, what are the changes we need? Anna, can you start with that?

I think it goes back to what I already mentioned, which is really close collaboration among AI companies, social media companies, election officials, and civil society, really working together to address this problem and sharing best practices and sharing knowledge, because this is a whole-of-society problem and no
single actor is going to be able to be fully effective in solving it.

I think the education component, and pushing to find trusted sources, is key. The technology is going to change; there's going to be new technology in the future no matter what the government does or what the tech companies do. We need to strengthen the trust that people can build in trusted forms of news, and I think we're seeing some of that starting to change.

Michael? I would say, in addition to those suggestions, information sharing: when there is an indication that something is coming that's part of a wave of disinformation, sharing information among all the stakeholders, including federal and state governments and the public, is very, very important. Now, I want to say I know that there's some litigation now where some states have tried to make it illegal for the government to share information about disinformation with the platforms, because they argue that that's censorship. I personally think that's nonsense; what you're doing is giving information that's helpful, and not doing anything that's harmful.

Yeah, I agree, and I think it's particularly for philanthropy and foundations to really invest in entities and partnerships that are focusing on this education and sharing of information, and on building more collaborative partnerships and teamwork around this. All of that, I think, has to be the foundation for every entity making its first priority protecting citizens from the ways in which AI can be negatively used to harm their voice and their votes in a democracy, and recognizing that our adversaries of democracy have figured out how to divide us and demobilize us and deter us from believing in our voice through the use of AI. So our response needs to be similarly collaborative, national in scope, and focused on empowering citizens and partners across every arena and sector, tech and beyond, to be part of the pushback and the protection of our citizenry from this threat to
our democracy.

Well, I can't thank the four of you enough, and maybe out of this panel will come that kind of cooperation. Let's try it out; let's get OpenAI, Facebook, and others together with people like Jocelyn and Michael, who have a lot of depth, and with what Dara knows from the money flows that she sees, and let's see if there can't be some cooperative effort between now and this election. If we don't try, we know what's going to happen. And I think, Michael, you made a great point: we need more transparency and openness, and that should mean declassifying information as quickly as possible so it gets out to the public. Frankly, governments need to get it out and not ask for permission, because it can influence the conversation going forward. But I think this idea of collaboration, it's always better in a democracy to have collaboration, to bring people together. So let's see what we can do to follow up. Thank you all very much.

There will now be a short break in the program. We will resume at 2:50 p.m. Thank you.

Hi, everybody. Boy, those were two fabulous panels, I hope you agree, and of course there were many mentions of those darn tech companies. So now we have the tech companies up here, and boy, are we going to grill them. Sorry, guys. These guys are terrible, I know. No, no, we will, actually. First of all, let me introduce our panelists. We have, at the end, David Agranovich, who is the director of global threat disruption at Meta; Yasmin Green, who is the CEO of Jigsaw, a unit of Google that addresses threats to open societies; and Clint Watts, who leads Microsoft's Threat Analysis Center, which is part of Customer Security and Trust. Where I want to begin, actually, with each of you, because all three of you, in slightly different ways, are focused on looking at the threats and the risks that you're seeing across your
platforms or across society, and I think maybe even more so in your case, Yasmin. So I want to begin by having you share with us what you're seeing, again particularly in relation to the use of AI when it comes to information deception or other forms of AI deception. And Yasmin, I'm going to begin with you: Jigsaw is a little bit of a different animal here, because you really are looking at societal changes and at what kind of interventions you can make, not even necessarily via your platforms, to effect change. So give us a little bit of a sense of what you're seeing.

Okay, well, hello, everyone. The panels before did such a good job of surveying the landscape, including the threats, that I wanted to get a bit specific and build on what was said. We talked about trust in the last panel, and one of our observations about the trust landscape is not that we are in a post-trust era, because as humans we have trust heuristics; we have to make decisions, we have to evaluate things. It's not that trust has evaporated; it's that it has migrated. Trust is much less institutional and much more social, and I think that's really important as we think about the risks posed by generative AI. We did an ethnographic study with Gen Z to figure out how young people are going about this, what trust heuristics they have online, and how they go about evaluating information.

I want to just do a survey of this room, and I think we have a good generational mix here. Just by show of hands, how many people here read the comments underneath news articles? I'd say that's maybe half, two-thirds. I've got to tell you, I don't read the comments; I thought our collective coping mechanism for the internet was that we don't read the comments. Okay, there are some nods. Well, I'll tell you who reads the comments: Gen Z. And the interesting thing is not so much that they read them as when and why.
When do they read the comments? They often go headline first. (Well, I've got an understudy here, which I appreciate; there's no one else I'd rather have speak the words from my mouth than Maria.) Headline, comments, and then the article. Why would they be doing it in that order? Because, and this is according to them in this research that we did, they want to know if the article is fake news. So you see the inversion there. I would look at the article as the journalists being the authoritative curators of information; they are interviewing experts who are authorities. But Gen Z, and I think increasingly all of us, are going to the social spaces to look for signal. We kind of threw out the term "information literacy" and looked instead at "information sensibility": they're looking for social signals about how to situate the information, the claims, and the relevance to them. We famously had the term "alternative facts"; this is alternative fact-checking, and we should be really concerned.

It's relevant to generative AI because one of the things we maybe emphasize less than we should, because it's a threat that's coming around the corner, is that in addition to synthetic content we have synthetic accounts, accounts that are, as we talked about earlier, these human-presenting chatbots. One of the products we offer at Jigsaw is the most popular free tool for moderating comment spaces, so we have billions of comments every day that go through our thousand partners, and we hear about synthetic accounts that are there and posting. They're not selling you crypto; they're not spreading disinformation; they are just active. What are they doing? They are building a history of humanlike behavior. Because in the future, yes, it's going to be really important for us, wherever we can, to do detection, to evaluate whether something's a deepfake. But when there's a deepfake, where do you
think people are going to go to check? They're going to go to other people in the social spaces for signal. So we need to invest in humans, and also invest in ensuring that the human-presenting chatbots do not have an equal share of influence there.

Fascinating. So synthetic identities, not just synthetic content. Fascinating. Clint, I've known you for a number of years; you've been with Microsoft for what, almost two years, but you have been doing this kind of deep digital forensics for quite a long time, so you've seen that trajectory of history, how things have evolved from before 2016 until now. So give us a little sense of what change you have seen, particularly since generative AI has taken off in the wake of the launch of ChatGPT, and what risks you're seeing today.

Yeah, it's interesting in terms of timing: it was ten years and two months ago that we encountered our first Russian account impersonating an American that would later go after the election. We were working from our house, and we used a tool called Microsoft Excel, which is incredible if you've ever checked it out; now we use Microsoft, so that's a major change in ten years. And what's interesting is that in 2014, '15, '16, it was tested on the Russian population first, then Ukraine, Syria, Libya; it was battlefields; and then it was taken on the road to all the European elections and the U.S. election. Watching what has transpired since then, there's often a little bit of a misunderstanding about how much things have changed in ten years in terms of social media. Speaking of Gen Z: Gen Z, would you read more than 200 words? I bet you would watch 200 videos. That's one of the biggest changes in ten years with the technology, and it's not just about Gen Z; it's about my generation, everybody older. Video is king today, and if you're trying to influence by writing a hot 9,000-word
blog, you're running uphill with a lot of weight on your back. Our monitoring lists in 2016 were Twitter or Facebook accounts linking to Blogspot; in 2020 it was Twitter or Facebook and a few other platforms, but mostly linking to YouTube; and today, any monitoring list of any threat actor is going to be all video. My team tracks Russia, Iran, and China worldwide; we've got 30 on the team, we cover 15 languages among the analysts, and we're mostly based here in New York. Nine months ago we began a dedicated focus on what the AI usage by these threat actors is, and we have some results of our research so far.

I would say that in 2024 there will be fakes: some will be deep, most will be shallow, and the simplest manipulations will travel the furthest on the internet. In just the last few months, the most effective technique used by Russian actors has been posting a picture and putting a real news organization's logo on that picture (I'm sure David will be able to tell you more about this) and distributing it across the internet, where it gets millions of shares or views. There have been several deepfake videos in and around Russia, Ukraine, and some elections, and they haven't gone very far, and they're not great yet. This will all change; remember, this is March, so things are moving very quickly.

What I would note, just looking at a few things, is that there are five distinct things to look at. One is the setting: is it public versus private? In public settings, and I would love David's take on this, when you see a deepfake video go out, crowds are pretty good collectively at saying, "Nah, we've seen that video before; I've seen that background; he didn't say this; she didn't say this." We've seen Putin and Zelensky deepfakes, and the crowd will surface the real video, and the fake kind of dies. The place to worry is private settings: when people are isolated, they tend to believe things they wouldn't normally believe. Anybody remember COVID, when we were all at our houses? It was very
easy to distribute all sorts of information; it was hard to know what was true or false or what to believe, and people had totally different perceptions of the pandemic.

The second part, in terms of the AI, is that the medium matters tremendously. Video is the hardest to make. Text is the easiest, but text is hard to get people to pay attention to; video, people like to watch. Audio is the one we should be worried about. AI audio is easier to create because the data set you need is smaller, so you can make it of a lot more people, and once you put it out there are no contextual clues for the audience to really evaluate. When you watch a deepfake video, you go, "Hmm, something's off; I know how that person walks; I've seen how they talk; that's not quite how it is." Audio you'll give a discount: you'll say, "Yeah, on the phone maybe they do sound like that," or, "Ah, it's kind of garbled, but maybe." That is where to look. We've seen that in the Slovak elections; we've seen that with the robocalls around President Biden; in Indonesia we've seen these sorts of examples. There was a deepfake video that used Tom Cruise's voice; he's probably the most faked person, both video and audio, around the world, and that's tougher to do.

That brings up the other thing to look for: there's an intense focus on fully synthetic AI content, but the most effective stuff is real, a little bit of fake, and then real, blending it in to change it just a little bit. That's hard to fact-check; it's tough to chase after. So when you're looking at it: private settings, and audio with a mix of real and fake, that's a powerful tool that can be used.

A couple of other things to think about are the context and the timing. Many of you probably saw information that was totally incorrect about the Baltimore bridge tragedy this week, right? People immediately rush to things, and when you're afraid, or there's something you've never seen before, you tend to believe things you wouldn't normally believe. So imagine it's a super contentious event, or there's
some sort of an accident or a tragedy; AI employed in that way can be a much more powerful tool. To do that, you have to have staffing, you have to have people, you have to know the technology, you have to have compute, and you have to have capacity. That's not a guy in his basement on the other side of the river, folks; that is a well-organized organization with technology, that has the infrastructure to do that, and is ready to run on something instantly, i.e. the Russian disinformation system, which doesn't hire just 10 or 20 people. We're talking about thousands of people who are working this non-stop, around the clock. And, as we know, in all of the governments around the world there are just thousands of people working to counter disinformation day in and day out, right? We stay till 2:00 in the morning watching. We're just not set up the same way, and so that gives them a strategic advantage. Ten years ago we were tracking two activity sets of Russia's that ultimately went for 2016; today my team tracks 70 activity sets tied to Russia. That just tells you, in terms of the scale worldwide, the way things are going; that's something to look for. The last thing to think about is knowledge of the target, and Secretary Chertoff brought up a great point: if people know the target well, they're better at deciding whether something is fake or not, if they've seen it over and over again; but if they don't know the target well, or the context well, they are not as good at it. So there's always the scenario of the presidential candidate: there'll be a deep fake, and it will change the world and make our heads explode. Probably not. But if it's a person who's working at an election spot somewhere out in a state, and a deep fake is made, or maybe they're not even a real person, it's these contextual situations that we have to be prepared for in terms of response. So our team is set up; we work with Google and Meta, and I would just tell you, from my experience being on the outside of tech and now being in it: ten years ago, when I
notified tech companies about the Russians going after the election, they told me I was an idiot and that no one would believe that. Now I work at a tech company and we do exchanges all the time. So I would just like to point out that I feel like we've got great relationships; Yasmin and David, we've worked together for years, you know, on different projects. So I think that's something else that's quite a bit different today.

That's great, thanks, Clint. David, I want you to pick up where Clint is leaving off: obviously any additional context that you can provide, in addition to what he said, about what you're seeing out there. But then I also want you to address something, pick up on something, that Clint mentioned, which is that it's one thing when it's, you know, a big splash deep fake that's all over public forums; those can easily be debunked, and I agree with you that the big spectacular deep fake of one of the major presidential candidates is unlikely to have a huge impact. But the stuff that you can't see, because it's on messaging platforms, that's what we're worried about. So talk about what you're seeing there.

Absolutely, and building a bit on what Clint had mentioned around what we're seeing from threat actors around the world: our teams have now taken down probably a little over 250 different influence operations around the world, including those from Russia, China, and Iran, but also a number of domestic campaigns from countries all over the world. Maybe the key three things that we're seeing, in addition to the trends that Clint mentioned: one, these are increasingly cross-platform, cross-internet operations. The days of a network of fake accounts on Facebook and a network of fake accounts on Twitter, somewhat closed ecosystems, are gone. I think the largest number we've ever seen is 300 different platforms implicated in a single operation from Russia, including local forum websites (things like Nextdoor, but for your neighborhood), as well as more than 5,000 web domains
used by a single Russian operation called Doppelganger that we reported on last quarter. What that means is that the responsibility for countering these operations is also significantly more diffuse, right? Platform companies don't just have a responsibility to protect people on their platforms, like the work that our teams do, but also to share information; I think Secretary Chertoff mentioned this in the last panel, and that means not just sharing information amongst the different platforms that are affected, but with civil society groups and with government organizations that can take meaningful action in their own domains. The second big trend that we've generally been seeing is that these operations are increasingly domestic and increasingly commercialized. There are commercial actors who sell capabilities to do what we call coordinated inauthentic behavior, disinformation for hire, something Maria's organization has written a lot about in the Philippines. The commercialization of these tools democratizes access to sophisticated capabilities that used to be basically nation-state capabilities, and it conceals the people who pay for them; it makes it a lot harder to hold the threat actor accountable, by making it harder for teams like ours, or teams in government, to figure out who's behind it. And then the third piece is the use of AI. Much like Clint mentioned, I think we've generally seen "cheap fakes", shallow fakes, or not even AI-enabled content but just things like photoshops or repurposed content from other events, mainly being used by the sophisticated threat actors, Russia, China, Iran; but where we do see AI-enabled things like deep fakes or text generation being used is by scammers and spammers. Now, that's not to downplay the threat: scammers and spammers are arguably some of the most innovative people in the online threat environment. They move the fastest, they're the least responsive to external pressure because they just want to make money, and they often are in
jurisdictions that aren't going to do anything about them. What we should all be alert to is the tactics and techniques that those scammers and spammers use being adopted by more sophisticated actors over time. So if you want to see what's coming, that's where I would be looking. Now, what can be done about it, what's working and what isn't working? Especially with some of the examples you used, some of these AI-enabled capabilities being used in smaller, more private settings, this is where things like watermarking come in; and by watermarking here I mean more what Anna Makanju was talking about, technical steganographic watermarking that can't be easily removed, used to identify whether content is authentic or was created by an AI system, and that can be perpetuated by social media platforms. So if a company that produces AI content, and Meta is one of those, is willing to be part of that coalition and make sure anything that our models produce is discoverable as AI-generated, then when it shows up on Twitter, or shows up on our own platforms, or shows up on Snapchat, it should carry through those standards. There was a compact at Munich amongst many of the tech companies; Microsoft was part of that, Google was part of that as well. The more we can raise the bar across the industry, requiring companies to build in these capabilities early, before we get to the point where the bad things have already happened, the more we can actually build meaningful defenses. One thing from the last panel that really stuck with me: when Anna was at the White House dealing with Russia policy, I was in the US government on the security side, also dealing with Russia policy, and we were chasing after the problem at that point, right? It had left the station. We have an opportunity now to start building these safeguards in as this technology is taking off, so I'm happy we're having this conversation now, and I thank everyone who pulled this together, because it's an incredibly timely moment for us
to be building this.

Thanks. I want to stick with you just for a second, David, and go a little deeper on messaging platforms. Of course, Meta owns one of the most used, most significant, largest private encrypted messaging platforms in the world, which is WhatsApp, and much of what we know is that these kinds of synthetic messages, no matter what form factor they are, video or text or IM or audio, travel through WhatsApp. Can you talk about how you think about ensuring that those platforms do not become vectors for this kind of harmful synthetic content around elections, what you're doing about that, and also about the open parts of WhatsApp as well?

Absolutely. There's some really exciting integration, I think, between some of the technical standards that we've talked about, things like steganographic watermarking that can be programmatically carried through on platforms, and ensuring that robust and reliable encryption remains in place for people all over the world, so that their communications can't be spied on by governments, particularly in authoritarian regimes. There are, I think, two different tool sets here. One is ensuring that, as platforms, whether it's WhatsApp or Signal or anyone else who's building these point-to-point communication tools, we're building in tools for the people who use the platform to identify and report problematic content, things like scams and spam, but also things like disinformation. The other is that we're building in technologies, as the industry takes up more of the safeguards around AI systems, that can be programmatically propagated in our own software without ever needing to break fundamental encryption. So you can imagine a future where we get all of these companies that produce AI images or AI-generated text to sign up to marking standards, and if that content ends up being sent through one of our platforms, the watermark can be carried through without having to have someone in the
middle saying, "Oh, that right there, that's AI-generated." So I think that's actually one of the several reasons why some of these technology standards are so important and can hopefully be enshrined not just in industry agreements but also in many of the regulatory conversations that are happening. Because there is a world in which we can, and I think it's really important to, retain fundamental encryption standards while still making sure that we are doing our due diligence and meeting our responsibility to protect the broader information environment.

Right. Well, certainly, though, there are things that Meta can do to keep these kinds of messages from going viral even while protecting encryption. Yasmin, talk a little bit, and I'm going to ask you both to talk a little bit, about what Google is doing to eliminate the risks and stop the spread of AI-generated misleading election information.

Yeah, I'll be quick. This idea of, at the origination of the content, trying to stamp it in a way that is enduring, so that it can be identified as synthetic, is really important, and that's ongoing work. One of the things that I think is interesting, actually, is just refusing to provide the generative AI service when the stakes are as high as they are, when there are election queries. A lot of people understand intuitively that there's a tension for technology companies between wanting to make the experience safe for the user and not creating so much friction that they don't want to use the product. So it's interesting, for example, that now, if you go to Google's generative AI product, which is Gemini, and you search for something election-related, it will give you a non-answer, which is actually a pretty crappy-feeling experience, you know; but they send you to Search instead, they say, go to Search. And there's research showing that people want an authoritative source. I think this
is interesting, thinking about this tension between authority and authenticity. Those are the mental models that we have from the last decade of search and social media: if it's coming from an institution that I trust, or even from Google Search, there's a lot of trust there, so the stakes are really high and you'd better get it right; or, with social media, if it's coming from my friend, they're in my social network, then I trust them. Of course, generative AI is neither of those. It's not authoritative; it's not summarizing what the internet says and giving you a distillation of something authoritative. And it sounds like a human, but it's not a human that you know. So I think we don't have mental models to deal with generative AI output, and at the moment, I think it's an interesting demonstration of a commitment to trying to put election integrity first to actually give users a pretty bad experience of the generative AI, defaulting to sending people to Search, which is more reliable, while they're still sorting this out.

We are quickly running out of time, so, Clint, just tell us what Microsoft is doing. We're going to get him this time.

I mean, look, okay, our work here is done. Just conceptually: the Russian concept of reflexive control, if you're familiar with it, is that you conduct an attack on your adversary, and then they attack themselves in response. That's somewhat what has happened over the last 10 years; they're winning through the force of politics rather than the politics of force. There are more than three nation states that will probably do some sort of election influence and interference. My team is designed to focus on the actors: Russia, Iran, China. You'll see that in our November report, and we have another report coming out that's election-focused. On this one, I think the key point is that you have to raise the costs on the adversary at some point, rather than raising the cost on yourself to function as a democracy.
And so there are lots of things we can do in policy and tech, and we inform those at Microsoft, and we do data exchanges amongst ourselves, but ultimately we've got to say: there's a hack here, there's a leak here, it's coming, we're anticipating it, and we're going to be out in front of it the next time. So it's inoculating the public, and it's also raising the cost for actors to do that. Sometimes that is methods and platforms, you know; we include controls. But a lot of it is awareness: communicating to the European governments, communicating to the US government, "This is what we're seeing," because we can often see it better from the private sector than the public sector can.

So you are sharing that information?

We do, yeah. If it's something that's impactful, our nation-state notification system does it.

Okay, well, we are out of time, so thank you so much; we could have gone on much longer. Appreciate it.

[Music]

Please welcome to the stage Columbia SIPA Professor Anya Schiffrin.

Thanks, everybody. It's so good to be here. I'm Anya Schiffrin, and I direct the technology and media specialization here at SIPA, where we are all abuzz; I see a lot of our students are in the room. We've been talking all year about AI and the elections and disinformation, and it's really been fantastic to have the Secretary here, as well as our Dean and Maria Ressa and so many of us who are involved. So this builds very nicely on some of the other events that we've had. I was lucky enough to get invited to Vivian Schiller's Aspen event in Miami in January (Tom Asher was there as well), where we laid out, and it was an incredible wake-up call, what the threats were. So we heard a lot of what we've been hearing about today: that audio deep fakes would be a real threat in the US, and that there would be certain pain points during the election, such as the counting, that would be really dangerous or risky. And then Alondra Nelson and Julia Angwin worked with IGP and brought together election officials
from around the country, and they came here in February to game out scenarios with Maria Ressa. They were such people, like Daryl Lindenbound; meeting them was so emotional and so inspiring, hearing the stories of the death threats and everything else. So I'm really glad that for this panel we're turning a little bit to the international situation. As we were all preparing for our classes in December and January, we were reading about how some two billion people were going to have elections this year, in dozens of countries, and, you know, we'd better really watch out; the US is not the only place where this is happening. And so I was just thrilled that IGP decided to bring in some international voices to this discussion, because I think we have a lot to learn. So we've got Ethan Tu from AI Labs in Taiwan, and of course Taiwan has the reputation for being really good at all of this, right? You had Audrey Tang during the COVID pandemic, you've got all that public diplomacy, and we all know you're used to China, so we're looking forward to your expertise. Javier Pallero: Milei just got elected in Argentina, so we're going to have to hear about how disinformation played into that, or not; this is obviously a country that's extremely polarized, so no surprises there. And this is the first time that I'm meeting Dominika Hajdu, and I think you're going to be bringing in the perspectives from Central and Eastern Europe. So, I think your election was first, Taiwan, right, Ethan? So maybe we'll start with you: how did it all play out? Did you have the same problems that we keep hearing about from the other panelists?

Yes. So, I can introduce my institute a little bit. We are Taiwan AI Labs, the very first open AI research institute in Asia; when OpenAI says they are young, we are a little bit younger than OpenAI. What we do is transparency and responsible, trustworthy AI evaluation, including on information manipulation. So, for example, during the pandemic we used
the artificial intelligence to know whether an account on the internet is real, whether there is manipulated information against Taiwan. And during the Taiwan presidential election we could observe billions of activities flowing through online social media, including Facebook, Twitter, PTT, TikTok, mainly a threat from China, or internal too. In Taiwan we have PTT, the largest Taiwanese internal platform, founded by me in 1995, and Facebook; of course Twitter is also one of the major platforms in Taiwan, and Taiwanese people also look into, for example, WeChat, Twitter, and TikTok; that's a big topic recently. So in Taiwan we also observed the information manipulation on these social media, all cross-platform. During the election there was a lot of troll activity, which is what Facebook, as just mentioned, calls coordinated inauthentic behavior. If we use artificial intelligence, we can identify those people: actually, they are not real humans. They appear together, disappear together, and spread the same false information together, and they like to reference video; short video is a trending topic this year. In the past we saw a lot of information manipulation in text, but this year we see a lot of short video, and there is short video on YouTube and also short video on TikTok, but usually the short video on YouTube originally came from TikTok; that's very interesting.

So, just what we were hearing about: cross-platform, and a lot of video and audio as well. Could you tell us the source? Were you also able to track down the source of this?

Yes. Using artificial intelligence, we know the deep fakes; there are a lot of videos that have the same narratives, but with different backgrounds and different anchors, and they are playing the same show, and they flow onto the YouTube and TikTok platforms to try to influence how people feel in Taiwan. So we use artificial intelligence, speech recognition, and language understanding; then, by identifying the troll accounts, we know those trolls are not real humans.

Sure. Then
we can cluster the stories they are trying to spread. So during the election we could clearly understand, for example, when the disinformation surged. For example, around Lai Ching-te: when he traveled to the United States, that was the very first huge wave of troll activity. And another peak was when Joe Biden said that if Taiwan is under attack, they will defend Taiwan; then we saw a spike of information manipulation saying that the United States is helping Taiwan to develop bioweapons. They try to destroy the narratives.

And so, is this basically China as the source?

Yes, according to our understanding. A lot of the troll accounts on social media will try to emphasize the same narratives as the state media, and if you look into the state-affiliated media, we can compare: we use artificial intelligence, we can group the same narratives together, and then we can see that the troll accounts on Facebook and Twitter, for example, echo the narratives in the China state media. You can see how it spreads.

Oh, great. Well, very interesting that you're able to do that kind of detection work, and I know you've put out some really interesting reports, reports that everybody who's interested can read. Dominika, I wanted to find out from you: does this sound like what you're seeing in your part of the world, video narratives being spread out from state actors, audio? What's the state of play where you are? And also, just to introduce, tell us about your organization as well; I'd love some more details.

To introduce myself: I come from GLOBSEC. It's a think tank founded in Central Europe; we were founded in Bratislava, Slovakia, but we cover basically all countries in Central and Eastern Europe, and we now have offices in Kyiv, in Brussels, and in DC. I am leading the Centre for Democracy and Resilience, and we were founded in 2015, so shortly after the annexation of Crimea and the invasion
of Ukraine, because we started seeing the floods of information manipulation and disinformation across Central and Eastern Europe, primarily coming from the Kremlin. At that time, of course, this was mostly limited to a few pages that spread pro-Kremlin propaganda, and it was very visibly pro-Kremlin or pro-Russian. But then, as was mentioned during the first panel, the tactics have evolved tremendously. First of all, the Kremlin, especially in the context of Central and Eastern Europe, has been able to build networks and proxies. So right now, as the Vice President of the European Commission mentioned, it's not that much about the Kremlin interfering directly; it's through domestic actors, political actors, websites, social media pages, etc. I come from Slovakia, and we had elections in September, and it was a peculiar case, because we could actually see both direct and indirect interventions from the Kremlin. Direct: a lot of the countries in the EU have this political campaigning silence, which means that from one to two days prior to the elections you cannot do any campaigning. During this period there was a press release by the Russian press agency saying that the US was going to interfere in the elections by doing everything they could for a pro-democratic, progressive party to win. And just so you are aware of how it was: of the two parties, one, rather nationalistic and populist, with some very strong pro-Kremlin figures, was running first, and a progressive, liberal, pro-democracy party was running second. So this was released, and around a very similar time a deep fake audio, a synthetically made audio, was also released on Telegram, by an account which probably belonged to the wife of a Slovak political representative who is currently being prosecuted for spreading Russian war propaganda. So attribution in this case is quite difficult. But this deep fake
spread through Telegram and on Facebook, and it had thousands of shares on Facebook, despite the fact that it was quite a lousy audio; if you listened to it carefully, you would actually see that it wasn't true. The problem is that, despite the fact that it's quite a small case in quite a small country, it draws several important lessons. The first is that we really need some red lines, clear lines, when it comes to generative AI prior to elections. Because I do agree that labeling is important, and watermarking and labeling are definitely a way to go; but what if there is content spread 24 hours before the elections, and it's made by a Kremlin-based or Beijing-based company which doesn't require such watermarking, because that is going to be a consensus only among the Western-based companies? Is there an ability to stop this if it's 24 hours prior to the elections? Are we going to ban it? Are we going to take it down? I think that there are also some red lines that have to be defined, and I think the Michigan law could be a way forward. The second is that these measures also have to be clearly defined for social media platforms, because what happened in Slovakia specifically is that there were around 70 pieces of AI-generated content identified, and around half of them stayed online; 15 were taken down, and maybe 13 were labeled, or something like that. So there is quite an inconsistency in treating these cases, and of course we need to treat some of the cases on a specific basis: whether they are talking about election manipulation, or about the narratives of rigged elections, which is actually what most of these deep fakes were talking about; this is the common tactic.

I hope we'll have time to talk about regulation, but I know, Javier, obviously some of our alumni, a lot of people, have been very involved in tracking Russian disinformation in South America, and
I've heard you saying, Javier, that actually that's not really the problem in Argentina, so that's quite interesting. What is the problem? I know Milei won with a lot of the youth vote; I guess the very high unemployment and inflation have been upsetting people quite a bit. What's economics and what's information?

Yeah, well, that was one of the main things that we observed. In my previous work I worked at an organization called Access Now; I used to be the global director of policy there, and we were able to see how these issues evolve around the world, right? And it caught my attention in Argentina that there were quite specific characteristics that I haven't seen anywhere else. For example, when I was also working on independent research on the Argentinian elections, we didn't find any specific, clear evidence, or at least initial evidence, of foreign intervention. Most of what happened with the online social movement that brought Milei to power was quite organic: we didn't see many fake accounts or troll centers, and we didn't see much foreign intervention. Which was very interesting, you know, in the sense that, as you mentioned before, Argentina is a country that has been divided for a while now. It started with Kirchnerism a couple of presidential periods ago, and this kind of internal political division was really, really strong from the beginning. So when one of the candidates appears and makes a proposal that goes against the status quo, the other half of the country is ready to engage. And there's a need to engage. There's something that we detected, which is very clearly a resonance of certain kinds of messages and words and ways of communicating online that resonated with the people. So I would say that the main aspect of Milei's campaign is the sentiment of anti-politics. People are not only disenfranchised or disillusioned with politics,
they are offended by politics and by politicians. It's a personal thing; it's really a reaction movement. And these people who don't talk like politicians, who don't look like politicians (you have seen his looks, or the way he conducts himself, and so on) are really, really attractive to that kind of people, and especially to younger people who, as was mentioned before by Ms. Green, have a different way of understanding information. This idea of moving from institutions to people as the source of authority is something that is really resonating with the population. And it's easy to understand: for example, in Argentina our military dictatorship lasted until the '80s, which in internet time is the Middle Ages, but in historic time is yesterday, right? So this idea of not trusting institutions, of the government being a potential source of oppression and violence, and also, at the same time, this idea of institutions that are really young, that haven't had the time to really get a good basis in our society, together with corruption and other kinds of things; it's a terrible mix. It's like the fertile ground for any of these populist leaders to just appear and gain a lot of following. So I would say that was one of the key points. Of course, the lack of regulation for platforms is a problem; countries like Argentina are second-class citizens, second-class users, for some platforms.

And that's what I also wanted to talk about: how much agency do you have? It's great to hear all the companies talk about the new standards, but let's face it, if they voluntarily started doing content provenance, they would set the standard for everybody. So, you know, instead of sitting around saying "I'm waiting for regulation," you can also model best practices. And then, I'm thinking, precisely, we know from all the reporting that's been done, including by many of the people
in the room, that your countries have less moderation; they have nobody to call; their minority languages aren't properly moderated or looked after. So I'm wondering about your perspective, and obviously Dominika will talk to us about the Digital Services Act, but are you going to sit back and wait for the big companies to change their policies, to start doing content authenticity? What can you do on your own? I feel like Brazil has really been leading the way for Latin America in terms of regulation, but other countries haven't been, and I'm curious to know whether regulation comes up in Taiwan as well, because it's not something I know that much about. Do you want to go first, and then we'll hear from Ethan, and then we'll talk about the DSA?

So, when we talk about regulation: Taiwan did have a failed case of trying to regulate the platforms, but in the end people said that, regarding information manipulation, it would go against freedom of speech.

Gotcha, so it's like the US.

Yeah. So actually in Taiwan we just recently published research on information manipulation on TikTok; maybe you can go to our website and look into it, and the narrative is pretty similar to what happened in Taiwan before.

Could you tell us what regulation had been considered, and what were the forces that defeated it?

So, I would say that if we go in the direction of fact-checking and content moderation, that direction will face a lot of challenges, because people will say it goes against freedom of speech; and also, with fact-checking, what is fake? There will be a lot of debate. So in Taiwan, instead of talking about fact-checking or content moderation, we are talking about how we can disclose the information manipulation.

Right, of course. So, I'm being given the sign that we only have five minutes, but this is of course what's happened in the US, right? We were told you can't regulate, but we can at least do media literacy and fact-checking; and then it turned out that even teaching media literacy
was controversial, and all the researchers who are doing the tracking are now getting subpoenaed. So, just like in the 1930s, when Columbia was also pioneering, the space is getting pushed. That's really interesting; I'm definitely going to go to your website as soon as this is over. Have you had any conversation about regulation? And then we're going to finish optimistically with the DSA.

Very quickly: I think that another untapped resource (Brazil is a great example) is the Inter-American system for human rights. The freedom of expression standards contained there are widely accepted across Latin America and across the Americas in general. It's a very good mix between the more strongly regulation-oriented stance on the European side and the more permissive, let's say, First Amendment standards in the US, so I think there's an interesting middle way to work there. There is jurisprudence from the Inter-American Court of Human Rights, for example, on indirect means of affecting freedom of expression; one of them, for example, could be the undue interference that some external actors, or sometimes social media companies themselves, exercise on people's discourse. So there's a lot of room to grow there, and of course there's a lot to do in terms of electoral regulation: modernization of the bureaucracy of the electoral commissions, giving more power, more agency, to them. Bolsonaro, for example, was stopped very, very quickly by the electoral authorities, and interestingly, he's been banned from participating for really much longer; and this is all electoral regulation, it's not content regulation.

Let's let Dominika get the last word, because I know we're running out of time. Okay, the DSA: how do we feel, is it helping?

So, the DSA targets illegal online speech, and I think this is very powerful legislation, in a way, because it doesn't target disinformation; there you're on very risky ground coming up with a plan. But when it comes to illegal speech, I think that it is making progress, because there
are actually requirements for the platforms to issue regular reporting, which is helping us, and on a per-country basis. This is important because for languages like Dutch, Slovak, Czech, and Hungarian you can actually see what has been done; we didn't have this information before. So in this sense it's really good.

And when do you think it will start to kick in? Because I know the different countries are still staffing up.

It has started already.

No, I know, but when will we all notice it?

Oh, well, okay. There are already reports out, so you can already check those out; if you do a bit of research, you will notice it. And on the platforms, for example, you can already report illegal content. What I'm worried about is the platforms that are not cooperative. There's so much exchange of information between Facebook, Microsoft, and Google, which is amazing, but then what about Telegram, for example, which is a source of extremism and also pro-Russian propaganda and all the malign content?

Very much so. And there's been so much interesting stuff; anyway, we could go on all day, but I certainly don't want to get in the way of the next panel, which is going to be really interesting. So thank you very, very much, Javier and Ethan and Dominika, and hopefully we'll keep talking.

[Music]

Please welcome back to the stage Secretary Hillary Rodham Clinton and, joining us virtually, IGP Carnegie Distinguished Fellow Eric Schmidt.

Eric, first: we are so delighted to have Eric Schmidt with us, especially because he is, as you just heard, one of our Carnegie Distinguished Fellows at the Institute of Global Politics, and he has been meeting with students and talking to faculty about a lot of these AI issues that we have surfaced during our panels today. And of course he wrote a very important book with the late Dr. Henry Kissinger about artificial intelligence. So we're ending our afternoon with Eric and
trying to see if we can pull together some of the strands of thinking and challenges and ideas that we've heard. So, Eric, thank you for joining us. You look like you're in a very comfortable but snowy place. I wanted to start by asking you: what are you most worried about with respect to AI in the 2024 election cycle?

Well, first, Madam Secretary, thank you for inviting me to participate in all the Columbia activities. I'm at a tech conference, an AI conference, in snowy Montana, which is why I'm not there. If you look at misinformation, we now understand extremely well that virality, emotion, and particularly powerful videos drive voting behavior, human behavior, moods, everything. And the current social media companies are weaponizing that, because they respond not to the content but rather to the emotion, because they know that things that are viral are outrageous; crazy claims get much more spread. It's just a human thing. So my concern goes something like this: the tools to build really, really terrible misinformation are available today, globally. Most voters will encounter them through social media. So the question is, what are the social media companies doing to make sure that what they are promoting, if you will, is legitimate under some set of assumptions?

You know, you did an article in the MIT Technology Review fairly recently, maybe at the end of last year, and you put forth a six-point plan for fighting election misinformation and disinformation; I want to mention both because they are distinct. What were your recommendations in that article, to share with our audience in the room and online, Eric? And what are the most urgent actions that tech companies, particularly, as you say, the social media platforms, could and should take before the 2024 elections?

Well, first, I don't need to tell you about misinformation, because you have been a victim of it, and in a really evil way, by the Russians. When I look at the social media platforms, here is the blunt
fact: if you have a large audience, people who want to manipulate your audience will find it, and they'll start doing their thing, and they'll do it for political reasons, for economic reasons, or they're simply anarchists. There are people who just want to take down powerful figures because they don't like authority, and they'll spend a lot of time doing it. So you have to have some principles. One is you have to know who's on the platform. In the same sense that if you have an Uber driver, you don't necessarily know the driver's name and details, but you can be quite sure that Uber has checked them out, because of all the various problems they had in the past, you trust Uber to deliver you a driver who is at least a legitimate driver. That's sort of the way to think about it: the platform needs to know, even if it doesn't tell you who they are, that they are real human beings. Another thing you have to know is where the content came from, and we can technologically put watermarks in; the technical term is steganography, where you use an encryption technique and mark where the content came from, so you know roughly how it entered your system. You also need to know how the algorithms work. We also think it's very important that you work on age-gating, so you don't have people below 16. Those are relatively sensible ways of taking the worst parts out. I think one of the things that's happened since I wrote that article is, if you look at the success of Reddit and their IPO: they were reluctant, like everybody else in my industry, to do anything. They brought in a new CEO who shut down entire subreddits of hate speech, and it improved the overall discourse. So the lesson I've learned is, if you have a large audience, you have to be an active manager of the people who are trying to distort what you as the leader are trying to do.

That Reddit example is a very good one, because, you know, I don't have anything like the experience you do, but just as an observer, it seems to me that there's been a
reluctance on the part of some of the platforms to actually know. It's kind of like they want deniability: I don't want to look too closely because I don't really want to know, and then I can tell people I didn't know and maybe I won't be held accountable. But actually, I think there's a huge market for having more trust in the platforms because they are taking off certain forms of content that are dangerous, however you define that. And your recommendations in your article focus mostly on the role of content distributors, so maybe go a little bit further, Eric, in explaining what we should think about, and maybe more importantly what we should expect, from AI content creators and from social media platforms that are either utilizing AI themselves or serving as the platforms for the use of generative AI. How do we think about protecting our elections? And does it matter whether it's a social media platform, a big AI company, or even open-source developers? Is there some way to distinguish that?

Well, it's sort of a mess, as the previous panel discussed, and the reason it's a mess is that there are many, many different ways in which information gets out. So if you go through the responsibility: the legitimate players, the authoring tools and so forth, all have a responsibility to mark where the content came from and to mark that it's synthetically generated. That seems kind of obvious; in other words, we started with this and then we made it into that. And there are all sorts of corner cases, like: I touched up the photo. Well, you should record that it was touched up, so you know that it's an altered photo; it doesn't mean it was altered in an evil way, but that's an example. The real problem here has to do with a confusion over free speech. I'll say my personal view, which is that I'm in favor of free speech, including hate speech, that is done by humans; then we can say to that human, you are a hateful person, and we can criticize them, and they can listen to us, and then we hopefully correct them. That
is my personal view. What I am not in favor of is free speech for computers. And the confusion here is, you get some idiot who is just literally crazy, spewing all this stuff out, whom we can largely ignore, but the algorithm then boosts them. So there is absolutely liability on the social media platforms, responsibility for what they're doing, and unfortunately, although I agree with what you said, the trust and safety groups in some companies are being made smaller or eliminated. I believe at the end of the day these systems are going to get regulated, and pretty hard, and the reason is that you have a misalignment of interests. If I'm the CEO of a social media company, I want to maximize revenue; I make more revenue with engagement; I get more engagement with outrage. So one way to think about why we are so outraged online is that the media algorithms are boosting outrageous stuff. Most people, it is believed, are more in the center, and yet we focus on the edges, and this is true of both sides; everybody's guilty. So I think, just to answer your question precisely, AI will get even better at making things more persuasive, which is good in general for understanding and so forth, but not good from the standpoint of election truthfulness.

Yeah, that is exactly what we've heard this afternoon: that the authoritativeness and authenticity issues are going to get more and more difficult to discern, and then it'll be a more effective message. And I was struck by one of your recommendations, which is a recommendation that could only be made at this point in human history, and that is to use more real human beings to help. It's almost absurd that we're sitting around talking about asking human beings to help human beings figure out what is or isn't truthful. But how do we incentivize tech companies to actually use human beings, and
how do we avoid the exploitation of human beings? Because there have been some pretty troubling disclosures about the sweatshops of human beings in certain countries in the Global South who are being driven to make these decisions, and it can be quite overwhelming. So when you've got companies, as you just said, gutting trust and safety, how do we get back to some kind of system that will make the kind of judgments you're talking about?

Well, speaking as a former CEO of a large public company, companies tend to operate based on fear of being sued, and Section 230 is a pretty broad exemption. For those in the audience, Section 230 is sort of the governing body of law on how content is used, and it's probably time to limit some of the broad protections that Section 230 gave. There are plenty of examples where someone was shot and killed over some content where the algorithm enabled this terrible thing to occur. There is some liability; now, we can try to debate what that is, but if you look at it as a human being, somebody was harmed, and there was a chain of liability including an evil person, but the system made it worse. So that's an example of a change. But I think the truth, if I can just be totally blunt, is that ultimately, the information space we live in, you can't ignore it. I used to give the speech where I would say, you know how we solve these problems: turn your phone off, get off the internet, eat dinner with your family, and have a normal life. Unfortunately, my industry, and I'm happy to have been part of it, made it impossible for you to escape all of this as a normal human being. You're exposed to all of this terrible filth and so forth and so on. That's going to ultimately get fixed either by the industry collaboratively or by regulation. A good example here: let's think about TikTok, because TikTok is very controversial right now. It is alleged that certain kinds of
content are being spread more than others; we can debate that. TikTok isn't really social media; TikTok is really television. And when you and I were younger, there was this huge fracas over how to regulate television, and there was something called the equal time rule, and ultimately there was a sort of rough balance where we said, fundamentally, it's okay if you present one side as long as you present the other side in a roughly equal way. That's how societies resolve these information problems. It's going to get worse unless we do something like that.

Well, I agree with you 100 percent, in both your analysis and your recommendations. And in the very first panel we talked about the need to revisit, if not completely eliminate, certainly dramatically revise, Section 230. It's outlived its usefulness. I mean, there was an idea behind it back in the late '90s, when this industry was so much in its infancy, but we've learned a lot since then, and we've learned a lot about how we need to have some accountability, some measure of liability, for the sake of the larger society, but also to give direction to the companies. These are very smart companies; you know that, you spent many years at Google. They're going to figure out how to make money, but let's have them figure out how to make a whole lot of money without doing quite so much harm, and that partly starts with dealing with Section 230. You know, when we were talking earlier about what AI is aiming at, the panelists were all very forthcoming, saying: look, we know there are problems, and we're trying to deal with them. We know from even just the public press that a number of AI companies have invented tools that they've not disclosed to the public because they themselves assess that those tools would make what is a difficult situation a lot worse. Is there a role, do you think, Eric? I know there was
the statement negotiated at the Munich Security Conference, which was a start, but is there more that could be done with a public-facing statement, some kind of agreement by the AI companies and the social media platforms to really focus on preventing harm going into the election? Is that something that's even feasible?

It should be. The reason I'm skeptical is that there's not agreement among the political leaders (of course, you're a world expert on that) and the companies on what defines harm. I have wandered around Congress for a few years with these ideas, and I'm waiting for the point where the Republicans and the Democrats agree, from their local and individual perspectives, that there's harm on both sides. We don't seem to be quite at that point. This may be because of the nature of how President Trump works, which is always sort of baffling to me, but there's something in the water that's causing a nonrational conversation; it's just not possible. So I'm skeptical that that's possible, though I obviously support your idea. The other thing I would say, and I don't mean to scare people, is that this problem is going to get much worse over the next few years, maybe or maybe not by November, but certainly in the next cycle, because of the ability to write programs. I'll give you an example. I was recently doing a demo. The demo consists of this: you pick a stereotypical voter, let's say a Hispanic woman with two kids who has the following interests, and you create a whole interest group around her; she doesn't exist, it's fake. Then you ask the computer to write a program in Python to generate 500 variants of her: different sexes, different races, different ages and backgrounds, who co-mingle the same voices. So AI, broadly speaking, can generate entire communities of pressure groups that are in fact virtual, and it's very hard for the systems to detect that these people are fake. There are clues and so
forth, but to me, this question about the ability to have computers generate entire networks of people who don't exist, acting for a common cause, which may or may not be one that you and I agree on, but which is probably driven by the national security interests of North Korea or China or Russia, or by some business objective from the tobacco companies or whoever you name: I worry a lot about that, and I don't think we're ready. It is possible, just to hammer on this point, for the evil person who is inevitably sitting in the basement of their home, whose mother gives them food at the top of the stairs, to do this on their computer in a day. That's how powerful these tools are.

Okay. Well, let's try to bring it back a little bit to where we are here at the university, in this great setting of so many people who have a lot to contribute, working in partnership with Aspen Digital, which similarly has a lot of convening and outreach potential. What can universities do? What can we do in research, particularly on AI? How do we create a kind of broad network of partners, like we're doing here between IGP and Aspen Digital, so that we begin to try to do what's possible to educate ourselves and our students in combating mis- and disinformation with respect to elections?

So the first thing we need to do is show people how easy it is. I would encourage every university program to have students actually try to figure out how to do it (obviously, don't actually do it); it's relatively easy, and it's really quite the eye-opener. It was an eye-opener for me, and I've done this for as long as I've been alive. The second thing I would do: there's an infrastructure that would be very helpful. The best design I'm familiar with is blockchain-based, and it's essentially a name and origin for every piece of content, independent of where it showed up. So if everyone knew
that this piece of information showed up here, you could then have provenance and understand how it got there, who pushed it, who amplified it. That would help our security services, our national security people, understand: is this a Russian influence campaign, or is this something else? So there are technical things, and then there are also educational things. I think this is only going to get fixed if there is a bipartisan, broad consensus on taking the edges, the crazy edges, the crazy people (and you know who I'm talking about), and basically taking them out. I'll give you an example. There was an analysis during Covid that the number one spreader of misinformation about Covid online was a doctor in Florida, responsible for something like 13 percent of all of it. And he was very clever: he had a whole influence campaign of lies trying to convince you to buy his supplements instead of getting a vaccine. That's just not okay, in my view, and the question for me is why that was allowed by that particular social media company to exist even after it was pointed out. So you have a moral framework, you have a legal framework, you have a technical framework, but ultimately it has to be seen that it's not okay to allow this evil doctor, for profit, to essentially mislead people on vaccinations.

Well, just to follow up on that: I don't at all disagree about what has to happen if we're going to end up with some kind of legislation or regulatory framework from the government, but is there anything the companies themselves could do, if they were willing, that would lay out some of the guardrails that need to be considered before we get to the consensus around legislation?

Of course, the answer is yes. But the way this actually works in a company is that you don't get to talk to the engineers, you get to talk to the lawyers, and the lawyers, as you very well know, are very conservative and won't make commitments. So it's going to require some
kind of agreement among the leadership of the companies about what's in bounds and what's out of bounds.

Right, and getting to that is a process of convening and conversations.

It's also informed by examples. I would assert, for example, that every time someone is physically harmed by something, we need to figure out how we can prevent that. That seems like a reasonable principle if you're in the digital world, as I am. So working back from those principles is probably the way to get started. It's not going to happen unless there's agreement: either it's forced on them by the government, or there's agreement among the CEOs. The best way to achieve that, in my view, is to make a credible and detailed proposal of where the guardrails are and what they mean. What I have learned in working on this is that you have to have content moderation when you have a large community. These groups will show up; they will find you, because their only goal is to find an audience to spread their evil, whatever the evil is, and I'm not taking sides here.

Well, I think the guardrails proposal is a really good one, and obviously, you know, we here at IGP and Aspen Digital, the companies who are here, the researchers who are here: maybe people should take a run at that. I'm not naive; I know how difficult it is. But I think this is a problem we all recognize, and it's not going to get better if we just keep wringing our hands and fiddling on the margins. We have to try something different.

So let me just be obnoxious. I've sat through all these trust and safety discussions for a long time, and these are very, very thoughtful analyses, but they're not producing solutions that are implementable by the companies in a coherent way. So here's my proposal: identify the people; understand the provenance of the data; publish your algorithms; be held, as a legal matter, to the claim that your algorithms are what you said they are (in other words, what you said you do, you actually have to
do); reform Section 230; make sure you don't have kids on the platform; and so forth. Make your proposals, but make them in a way that is implementable by the teams. For example, if there's a particular kind of information that you think should be banned, write a specification of it well enough that, under your proposal, the computer company can stop it. That's where it all fails: the engineers are busy doing what they understand and aren't talking to the lawyers much, the lawyers' job is basically to prevent anything from happening because they're afraid of liability, and you don't have leadership from Congress, for the reasons that you know. That's why we're stuck.

Well, that's both a summary and a challenge, Eric, and I particularly appreciate it, especially the work you've been doing to try to sort this out and give some guidance. So you get the last word from beautiful, snowy Montana: the last word to offer that challenge, to ask us to respond, to follow up on what you've outlined as at least one path forward, and to try to do it in a collaborative way with the companies and other concerned parties.

I'll do this as the snowstorm is hitting behind me. Look, I think the most important thing we have to understand is that this is our generation's problem, and it is under human control. There's this sort of belief that none of this stuff can get fixed, but you know from your pioneering work over some decades that with enough pressure you really can bend the needle; you just have to get people to understand it. These problems are not insolvable. This is not quantum physics, which is impossible to understand; it's a relatively straightforward problem of what's appropriate and what's not. The AI algorithms can be tuned to whatever society wants. So my strong message to everyone at Columbia, and of course all the partners, is: instead of complaining, which I like to do a great deal, why don't we collectively
write down the solution, organize partner institutions, and try to figure out how to get the people in power to say: okay, I get it; this is reasonably bipartisan, and it makes society better. There's this old rule, Gresham's law, that bad speech drives out good speech, which is why the internet is a cesspool. I used to say that, and I would say: since I don't like to live in a cesspool, I just turned it off. The problem you have, and this is especially true for young people: the damage that's being done online to women and so forth is just horrific. Why would we allow this in modern society? We can fix it; you just have to have an attitude. I'm trying to fund some open-source technology in this area, better tools to detect bad stuff. It's going to take a concerted effort, and I really appreciate your attention on this, Secretary; somebody's got to push.

Well, you and I, let's keep going, Eric. I'm so grateful to you, and I hope you have a great time in the snowstorm and whatever else comes next. But let's show our appreciation to Eric Schmidt for being with us. Thank you so much.

[Applause]

Well, I think we have a call to action. We just have to get ourselves in the frame of mind that we're willing to do that, and even writing something down will help to focus our minds on what makes sense and what doesn't. So we're not going to let you all off the hook; we want to come back to you, and we want to have something come out of this. We can talk about this and meet about this till the cows come home, but in the meantime, as Eric said, and I agree, it'll just get worse and worse, and we have to figure out how we can assert ourselves, maintain the good, and try to deal with that which is harmful. So please join us in this effort, and as I say, we will come back to you and seek your guidance and your support. Thank you all very much.

[Applause] [Music]
With billions of people heading to the polls this year, the stakes for global democracies are substantial. Advances in generative AI and its broad accessibility are already reshaping sectors, with exponential growth expected to continue. How will these tools change the information ecosystem, and how will governments, campaigns, and the public respond to the new challenges?
Against this backdrop, the Institute of Global Politics at Columbia University’s School of International and Public Affairs and Aspen Digital gathered political affairs experts, tech executives, leading academics, and public servants for an afternoon of discussions examining how AI has played a role in the elections that have already taken place in Taiwan, Argentina, Slovakia, and elsewhere, and what that means for elections later in the year.
Agenda

1:30-1:45pm – Introductory Remarks
Keren Yarhi-Milo, Dean, Columbia School of International and Public Affairs; Adlai E. Stevenson Professor of International Relations
Vivian Schiller, Vice President and Executive Director, Aspen Digital

1:45-2:15pm – Risks to the 2024 Global Elections
Hillary Rodham Clinton, Professor of International and Public Affairs, Columbia University; 67th Secretary of State and former Senator from New York; IGP Faculty Advisory Board Chair
Věra Jourová, Vice-President for Values and Transparency, European Commission
Maria Ressa, Nobel Peace Prize-winning journalist; Cofounder, CEO, and President of Rappler; IGP Carnegie Distinguished Fellow

2:15-2:45pm – Lessons Learned from Recent Global Elections
Dominika Hajdu, Policy Director, Center for Democracy & Resilience, GLOBSEC
Javier Pallero, Digital Rights Researcher and Activist, Argentina
Anya Schiffrin (moderator), Senior Lecturer in Discipline of International and Public Affairs and Director, Technology, Media and Communications specialization, Columbia SIPA

2:45-3:15pm – The Role and Responsibilities of Tech Companies
Michael Chertoff, Former Secretary of Homeland Security; Co-Founder and Executive Chairman, Chertoff Group
Dara Lindenbaum, Commissioner, Federal Election Commission of the United States
Anna Makanju, Vice President of Global Affairs, OpenAI
Hillary Rodham Clinton (moderator), Professor of International and Public Affairs, Columbia University; 67th Secretary of State and former Senator from New York; IGP Faculty Advisory Board Chair

3:45-4:15pm – Spotlight Interview
Eric Schmidt (joining virtually), Cofounder, Schmidt Futures; Former CEO & Chairman, Google
Hillary Rodham Clinton (moderator), Professor of International and Public Affairs, Columbia University; 67th Secretary of State and former Senator from New York; IGP Faculty Advisory Board Chair

4:15-4:30pm – Closing Remarks
Hillary Rodham Clinton, Professor of International and Public Affairs, Columbia University; 67th Secretary of State and former Senator from New York; IGP Faculty Advisory Board Chair