Good morning, everybody. Thanks for joining us today, and thanks everyone for being here. We're really excited about the program today. Sorry we can't be together in person, but as a snow guy I'm actually pretty excited to be looking out my window and finally seeing some snow in DC. I do want to take a moment to thank and recognize Isabella Sento and Beth Seml from our team, who did just an amazing job transitioning us to a virtual event seamlessly, so thank you for your expertise and professionalism.

Turning to the subject at hand: I don't think I need to tell anyone how AI has come to dominate the public conversation today. You pretty much can't avoid it. But there is a lot of FUD, of disinformation, even some misinformation out there, and today we're hoping to cut through some of it, particularly focused on the cybersecurity angle. We also want to talk a bit about two papers we recently published from the Aspen Digital cybersecurity program. One is focused on the end users of these new AI tools, particularly what they can do today, as they're installing these tools, to make sure they're not introducing new cyber risks into their environment. The other looks at how governments around the world should approach the governance of generative AI specifically, a big topic of discussion today.

But we're going to start here in the United States with the landmark executive order on artificial intelligence that President Biden signed at the end of October, and to that end we're really fortunate to have with us today Ben Buchanan, the White House special adviser on artificial intelligence, who will talk with us about the EO. Previously Ben served as the director for technology and national security at the National Security Council, and he is currently on leave from his professorship at Georgetown University. Ben has written three books on AI, cybersecurity, and national security, so we are all very fortunate that he was in the seat he was in when AI really boomed as an issue. He was the driving force behind Executive Order 14110 on the safe, secure, and trustworthy development and use of artificial intelligence. Ben, thanks for your effort on that and thanks for joining us today.

Thanks very much for having me, Jeff. It's good to be back with the Aspen crew. I also like looking at snow outside my window; I think the only thing even nicer is looking at snow outside my window when I'm with everyone in Aspen, so we'll have to do that next time. But this is great.

Ben reminds me that I left out one of the key elements of his resume: he was a member of our Aspen cybersecurity group before he went into government service. So when we talk about the EO, Ben, before we talk about the what, I wanted to maybe talk about the why. Beyond the obvious, that it's all over the news: what was the thinking behind it, what ends were you trying to get at, and how do you think an EO can actually reach those, given the limitations of what executive orders can do?

It's a really important question, because you certainly don't get an executive order this long and this lengthy if it were just a PR exercise; there's a lot of substance that drives us. It's probably worth saying the EO is not the first time this White House took action on AI; there's a lot that we've done over the previous years. While a lot of the world was taken by storm by ChatGPT, I think we saw it as part of a broader AI story, of which large language models are a key component, but just one component.

I think the executive order is meant to build on a lot of that foundational work and to position the United States so that it's not following, and not lagging, in the age of AI. One of the key things that resonated with the President and with the team at the White House is that this is maybe the first technological paradigm shift that is not, at its core, driven by or built on a foundation of government work. If you look at the rise of the nuclear age, or the space age, or even the early days of the internet and the microprocessor (go far enough back and you see railroads too), there's a very strong government hand in those technological paradigm shifts. This one is a little bit different, and I think the President's view was: it's great that the private sector is leading, it's great that all this innovation is happening; we need to make sure there's an appropriate role for government and that we're not being left behind by the pace of change. That, more than anything, was the genesis for the executive order.

That last point, how different the origin of this shift is, I hadn't even focused on; maybe if we have time we'll circle back to it. The EO itself has the eight guiding principles and priorities, I think it calls them. I know we're not supposed to have favorite children, but is there one of the eight that you think is the most important for people to be focusing on today?

We definitely don't have favorite children in this business. It is worth saying that none of this happens without government talent, and the last of the eight key sections that you mentioned is about getting a lot of talent into the government. Here we've been blown away by the amount of interest. There was a talking point for a long time of "no one good wants to work for the federal government; how could the federal government ever get AI talent?", or even what I just said, that the private sector is driving the train here. And since we launched, or relaunched, AI.gov in the fall with tons of AI jobs, we've been overwhelmed by the number of applications to those jobs, to things like the U.S. Digital Service and the like. Now, we've got a long way to go in actually bringing those people in, especially when it comes to appropriations and scaling up the workforce we need, but we've already done a lot around authorities, posting jobs, and getting people in, to enable us to do the rest of the things in the executive order. If we don't have that federal government talent, we know we won't succeed in everything else.

Great. Are there any of the eight principles and priorities where you think we can make the most progress the quickest? I know government hiring is notoriously slow, and I know there have been a lot of efforts, but either that one or others: where do you think we can have the biggest benefit quickest?

One of the places where we know we have to move fast is on a lot of the safety and security issues. Here we stood up the AI Safety Institute at NIST in the Department of Commerce, a place you know very well, Jeff, and it's meant to set standards for AI safety across the board. We've also used the Defense Production Act to compel companies to share their safety test results with us prior to making their systems public, and we used an executive order I think you also know quite well, the infrastructure-as-a-service executive order, to compel the disclosure of foreign training runs on U.S. soil, so the PLA can't just use U.S. computing power to train an AI system on American soil with American cloud compute.
All of those things, the DPA action and the infrastructure-as-a-service action, are on a 90-day clock, so they come due at the end of January, and I think we're going to hit those deadlines and take action there. Then the standards and the like: the first cut on those comes due in July. So we've definitely had an aggressive set of actions here, and the EO is pretty aggressive across the board on fast timelines.

By PLA you mean China, the People's Liberation Army?

Yes, that's right. Sorry for the lingo.

No problem. You mentioned NIST. If you look historically at the public efforts NIST has done on cyber, the Cybersecurity Framework early in the last decade became an extremely collaborative event; I was in the private sector, so I saw it. How closely engaged are you? Are you getting that kind of feedback and engagement? Are you hearing from NIST and others that industry is leaning in and wanting to work with government on these documents?

Yeah, across the board. Even before the executive order, as you know, we rolled out the voluntary commitments from 15 companies, where the CEOs and leaders of those companies came to the White House to pledge to the President what they were going to do. That was born of a desire that, even with the absolute breakneck speed at which we pushed this executive order, it just takes months to get through that process, and we could get more from the private companies more quickly over the summer, in the interim. So that was the start of our collaboration with industry, and it's continued, not just with industry but also with civil society more broadly. NIST, as you know, has something called the consortium model; they posted a request for applications to the AI consortium. I don't think they've announced who's in it yet, but I think it's fair to say you'll see a lot of private sector companies, and a lot of the most meaningful ones in the space.

Great. You talked a bit about implementation: the first tranche of work due, I think you said, in January and then July. There are myriad standards, guidelines, and best practices; I saw 270 days in there in a bunch of places, and 90 days as you mentioned. Let's say they all get done, assuming NIST and others have the resources. What's next? How do you see the government and the private sector taking those documents and turning them into improvements in the safety, security, and trustworthiness of AI?

I think the documents will be iterative. The technology is moving so fast that it would shock me if we had one set of safety standards and it lasted for a decade. So there will be some iteration there, and that iteration will be done with the private sector. Obviously we appreciate the value of consistency for industry, so I don't think it's going to change dramatically, but with technology moving this quickly there will be some continued iteration; I doubt we're one-and-done there. More generally, though, I think there's a conversation with the Congress that will happen at some point. Obviously this is an election year, so there's tons of complexity there, but in 2024 or 2025 I think we're going to have to decide what AI legislation looks like. We built a foundation in the EO very deliberately for a lot of this, but there are things that only the Congress can do, and I think we're going to have to have that conversation, and I'm sure a lot of our private sector colleagues are going to be a part of it.

If I can jump in: that's a really important point that, at least in my experience, a lot of people don't understand about an executive order. You are constrained by the authority the President already has through the Constitution or existing law; you cannot go out and create new authority or impose new mandates that don't have that underlying legal basis. I'm sure OLC, the Office of Legal Counsel, spent some time with you making sure there was a foundation for everything you were doing.

Yeah, and with an order this complex, OLC probably spent more time on the legal mechanisms and the like than was spent on actually writing the first draft of the text itself. They really worked through thousands of edits, they spent at least a month or two on this, and they did a terrific job under a lot of time pressure. It's also worth saying one thing you can't really do, with some very narrow exceptions, under executive authority: spend money or appropriate new money. That's why, for things like AI talent, we are doing everything we can to loosen up authorities, direct-hire authorities and the like, but if we want to hire a ton more people to do AI, in the end that is going to be a congressional responsibility.

For sure. So in a few places, kind of bridging on the authority basis of an executive order, the AI, sorry, the EO does talk about mandatory security baselines or risk management practices, and I think there are some nods to potential regulation. If you go back to when cybersecurity became a policy issue, the U.S. at least took a very voluntary approach; efforts in 2010, 2011, 2012 led to the NIST Cybersecurity Framework. It seems like this administration, starting with Executive Order 14028, which the President signed in the spring of 2021, has shifted more towards "okay, we've tried that, now we need to focus more on some mandatory baselines," and it seems like the EO picks up on that theme. Obviously it cannot impose new regulatory obligations, but is it accurate to say that even in the AI space the administration's thinking is that voluntary is great, but we also need to make sure there are some requirements, particularly when it comes to critical infrastructure?

Yeah, I think that's right, and I think it's right above a particular threshold. It's worth narrating: when I talk about the Defense Production Act and the safety test requirements, that only kicks in when you're basically spending something like half a billion or a billion dollars on a single AI system. What we tried to do here is preserve as dynamic and innovative an ecosystem as possible below a threshold of security concern, and then above that threshold say: okay, you've got to share your safety test results with us, we've got to work through this and make sure everything is on the up and up, because there is a no-kidding national security concern. I think the same is true for critical infrastructure, where we're trying to have as dynamic an environment as possible, but we want to make sure the appropriate regulators in each of those areas are doing their jobs and updating the regulations to account for AI, in large part because in some of those sectors our view is that clear guidance can enable companies and infrastructure providers to move faster. This is a case where a lot of what we hear from the private sector is that they just want clarity on what the rules are going to be and what the standards are.
In the same way that, in the early days of the railroad, trains with brakes could go faster because they could also slow down and manage their speed, it's probably the case here that the right set of safety standards, done in collaboration with industry and civil society, enables us to move faster as a country in the age of AI.

Have you had any engagement at the state and local level? I know some states are looking at whether to regulate, and oftentimes states can legislate more quickly; hypothetically, if California were to pass a law, it would have national and maybe international impacts. Have you gotten any response from state-level AI commissions and experts, good, bad, or indifferent, to the work that you did in the EO?

Yeah, actually a fair amount, and we've had good conversations with folks. I don't think we're at the point of parsing legislation at the state level and so forth, but there's been a lot of interest from states saying, (a) thanks so much for laying out a template for us, we see we can build upon it; (b) how can we help; or (c) we're just trying to get our heads around this, especially a smaller state that might not have a huge AI or technical cadre in its government, so where can we start? And we've got, as you know, the offices in the White House that deal with state and local, and we've had good conversations through them on a number of these issues.

I want to talk about one of the regulatory issues that has come up globally: transparency, making sure people know that they're interacting with an AI bot, or seeing AI-generated content, or having some AI-driven interaction. In the two papers I mentioned earlier that the Aspen cyber program put out, we touched on this; we talked about being intelligently transparent, not just pushing notification for notification's sake. Our concern was that if people are flooded with disclosures, it becomes meaningless, a click-through. I think a lot of GDPR, the General Data Protection Regulation out of the EU, has created click-through requirements, and I would question how much value it has added to the privacy of individuals. There's definitely a balance. I'm wondering if you view that as a valid concern: that we have to do this transparency in a way that means something to people.

Yeah, this actually came up in one of the earliest meetings we had on the subject: it's meaningless if a dialogue box comes up and says this photo was created with a computer, this audio was created with a computer. You can imagine a world in which AI becomes so ubiquitous that we have to figure out what is actually meaningful and what we are disclosing. One of the things we have looked towards, which took a long time but I think works very well, is the cryptographic HTTPS standard in cybersecurity. A lot of people now know what the green check mark means in their browser; it's unobtrusive but meaningful, even if people don't consciously look for it. So there are instances of transparency done well that we want to mimic. But there's also a lot of complexity here, and we need to figure out what exactly we are disclosing, when, and how. There's been a lot of work started by consortia, the content provenance consortium called C2PA being an obvious one, but it's fair to say that we've got a lot more to do, and happily that's a task that's been given to our good friends at NIST to think through.
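The HTTPS analogy points at the underlying mechanism: a cryptographic signature that travels with the content and can be checked unobtrusively. The sketch below is a minimal, hypothetical illustration of that idea, a publisher signing a small provenance manifest and a consumer verifying it; it is not the actual C2PA specification or tooling, and the manifest fields, key handling, and tool name are assumptions for illustration only.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: attach a signed manifest describing how the asset was made.
signing_key = Ed25519PrivateKey.generate()    # in practice, a long-lived publisher key
image_bytes = b"stand-in for the actual asset bytes"
manifest = json.dumps(
    {
        "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": "example-image-model",   # hypothetical tool name
        "ai_generated": True,
    },
    sort_keys=True,
).encode()
signature = signing_key.sign(manifest)

# Consumer side: verify the manifest against the publisher's public key.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, manifest)
    print("Provenance manifest is authentic:", json.loads(manifest))
except InvalidSignature:
    print("Manifest was altered or was not signed by this publisher.")
```

The point of the analogy is that a check like this can be automated and surfaced as a small, trusted indicator rather than another dialogue box the user has to click through.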
Do you think there could be, and you can answer this as a personal view if you like, a point where we actually should not require disclosure? As you said, if almost every piece of content out there has touched AI in some way and it becomes a constant thing, are we going to need some threshold? You talked earlier about this evolving, and this might be evolving as well. So that is half the question. The second half is: do you have thoughts on what end we want to achieve through disclosure? Because maybe that's how we get at when we disclose: what benefit or value does it add to the end user to know that there's an AI element here?

Yeah, I think that's certainly the case. In the voluntary commitments we built the threshold in, so you don't have to disclose if someone uses an Instagram filter; the disclosure kicks in when you're creating new images from whole cloth or radically editing images with something like DALL-E 3. That's where the disclosure kicks in, from a threshold perspective, in the voluntary commitments. I wouldn't want to prejudge where we land as a U.S. government on this, but you could probably expect something similar. Even when you take a photo on an iPhone, there's so much AI processing that happens on the photo itself that it would be preposterous to say it is not AI-enabled in some mechanism. So I think we agree that we've got to get the technical details right. I don't know exactly where we land in doing so, but we very much agree on the premise. The goal of disclosure is that folks know what's real and what's not. If images are meant to be hyper-realistic and meant to fool people, then we want it disclosed that this is not actually the real thing; if they're not, or if it's subtle alterations that don't mislead people, then we're in a world where we're saying, let's not put up more meaningless dialogue boxes. So we have a long way to go in building this ecosystem technically, and we don't have a ton of time to do it, but this is a case where it's in the voluntary commitments, in the EO, and in the implementation framework as a top priority.

Yeah. If there's one thing we can be confident of, it's that criminals and fraudsters will be using EO, sorry, AI (too many acronyms) to their benefit as quickly as possible, and they unfortunately tend to innovate relatively quickly.

Yeah, I think we're seeing it already in a lot of these places, and there's going to have to be some kind of societal adjustment to it, in the same way that the Photoshop era prompted a little more skepticism about what we see. I think that's going to continue, but I also think we have to build an infrastructure around it that is robust to these changing technologies.

Do you think there are any facets of the EO that have been under-reported? Is there something you wish people were talking about more, issues that may have been pushed down, either in the EO itself or more broadly? In my mind I have post-quantum crypto as an issue that is still out there, is enormous, and needs resources behind it.

Yeah, there's a whole separate process in the White House on post-quantum crypto; Tarun Chhabra is very involved in that, and a lot of good folks are working on it. One that I'm surprised hasn't gotten as much attention as maybe it should is the international dimension to this. We worked very hard, again even before the executive order, on the international aspects of AI. That included putting together, early in 2023, a group of 25 or so nations that helped us shape the voluntary commitments and the EO text. We continue to meet with those nations, and they're diverse in their geography, their socioeconomic development, and their perspectives, so that's been good. We had the first-ever international code of conduct for AI at the G7, which rolled out the day of the executive order and essentially builds upon the voluntary commitments from our companies, which we think was very important. And we're starting to have conversations in other international fora; obviously the UN will be a place for a lot of this work. So the international piece, a lot of which is directed by the AI executive order and a lot of which predated and informed it, is a really important part of building a safety regime, an AI regime, where standards are interoperable and AI systems are safe, secure, and trustworthy no matter where they're trained, with a wide group of nations working together on this. And it's worth saying that, for as much as we are eager to work with our allies and partners, we have a line in the AI executive order that we're also willing to work on AI safety with our competitors, and this is something President Biden and President Xi discussed when they met in San Francisco in November.

That will be an interesting dynamic to watch develop. I guess my last question is this: I believe you are now the driving force behind the longest executive order in presidential history. I'm wondering if you've caught any flak from the folks behind the previous record, or whether you're having to look over your shoulder because you've supplanted them. Do you think anyone can ever beat your 110 pages?

Well, all I can say is, if you don't want a long executive order, don't hire a professor to do it. What's the line: I didn't have the energy for a short letter, or a short executive order. No, I think we tried to pare this down as much as we could, but the length here is not a bug but a feature: it reflects the breadth of the AI policy issues. We have to show up for civil rights, for workers, for consumers, for privacy, for safety, for the international dimension of this, for government talent, and the like. At some level, if you're going to cover the waterfront on something like this, it does require a lot. So we've had to work really hard on implementation, but we're expecting to hit all the deadlines so far, so I think we're off to a good start.

Well, that's great news, and I think a great place to end. Thank you for leading an effort that did include all those key pieces, because it sends a really important message to the community that everything from national security to civil rights to all the other aspects is built into it, and we're moving forward on all of them together. So thank you again for joining us, and thanks for your efforts. Hopefully you can rebuild, or fix, the sleep deficit sometime soon, but we appreciate all your work here.

Great. Well, we are now going to turn to George Barnes.
It is my great honor to introduce George. He is now the president of the cyber practice at Red Cell Partners, but you probably know George best from his 35 years of service at the National Security Agency. From 2017 to 2023 he was the deputy director and senior civilian leader of the NSA. As deputy director he was the NSA's chief operating officer, overseeing strategy execution, establishing policy guiding operations, and managing the senior civilian leadership: an exemplary record of service during some very interesting times. George, thank you for that, and thank you so much for sharing some of your thoughts with us today about AI and cybersecurity, and snow in DC generally. Let me hand over to George. Thanks again.

Well, thanks, Jeff, and good morning. Thanks really to you and your team there at Aspen Digital for all the adjustments given the snow, which is great because it's our first big snow of the year, and thanks also to Ben Buchanan for framing the executive order. It was quite an accomplishment; there's a lot for us to track there, but we're well on our way.

As Jeff noted, given my years of tracking and defending against foreign malicious cyber actors while at the NSA, I thought I would first set the context for the constant cybersecurity struggle we've entered and can't seem to master. The preponderance of our cybersecurity challenges stem from the persistent and intensifying threat from four countries and the criminal elements they harbor and/or directly enable. You know the list: China, Russia, Iran, North Korea. We hear it over and over again. The fact that the global community lives on one giant network has naturally made the world smaller, and network-enabled operations have served as asymmetric force multipliers for those who have felt weak relative to our strengths. This dynamic has also sharpened the polarity between law-abiding liberal democratic societies and aggravated autocracies that have decided to ignore or bypass international laws and norms. Let me briefly run across the wave tops of how these four countries brazenly challenge us with increasing vigor, resulting in unabated pressure on our cybersecurity community for better defenses.

China under Xi Jinping has put more malicious cyber pressure on the U.S. and the Western world than all other actors combined. Their cyber operations continue to hit every facet of our society, enabling intellectual property theft, identity theft, political influence operations, apparent cyber-attack tool prepositioning in our domestic critical infrastructure, and broad-based espionage against every corner of our government. They use our open society and industry-friendly laws to their advantage, setting up networks, service providers, and supporting technologies right here in our country, yet they preclude reciprocal access by U.S. industry except in highly controlled edge cases, such as Apple's agreement to establish a wholly independent infrastructure there.

Putin's Russia has used the cyber domain to extend and intensify its traditional statecraft of attempting to destabilize any nation it deems a threat and to coerce nations it wants to control. We've all watched this play out in its ongoing campaign against Ukraine, but also recall their past cyber attacks on Georgia, Estonia, and, yes, Ukraine. The GRU, FSB, and SVR all have highly sophisticated cyber services, which have all wound their way into our country. Putin commissioned the now-deceased Prigozhin and his troll farms to launch major democratic election interference campaigns against the U.S. and myriad European countries, with the U.S. 2016 general election serving as their hallmark endeavor. And let's not forget that Russia proudly serves as Grand Central Station for the world's largest concentration of ransomware actors, hitting our hospitals, schools, small governments, and untold numbers of companies.

Turning to Iran: most of their cyber operations are conducted against Israel, but they continue to pursue retribution operations, election meddling, and influence operations against our interests. And then there's North Korea, which has somewhat mastered network operations aimed at revenue generation, either through ransomware operations or through direct theft and extortion from banks and cryptocurrency outlets.

None of these actor sets are sitting still. They have honed their tradecraft and increased the scope and scale of their operational focus. They've capitalized on earned efficiencies and nominally permissive environments with little to no repercussions. Accordingly, they continue to increase investment in cyber operations as main components of their defense, intelligence, and economic machines. Moreover, China fully understands the force-multiplying effect of advanced technology and has not been bashful in its pursuit of advanced computing technologies despite our efforts to stem the tide. Just yesterday, Reuters reported on successful efforts by China's military and government to acquire Nvidia GPUs that are currently banned for sale to China. The numbers uncovered were relatively small, but the actual numbers are likely much larger and indicative of China's quest to compete. All in all, we know China, and we know they push to realize their strategies by whatever means necessary. This invariably correlates to actions that challenge the cybersecurity posture of our industrial base across literally every sector. And as good as we've gotten, across industry and government, at identifying, eradicating, and defending against these actors, we remain woefully vulnerable and are continuously suffering losses.

So where are we on this journey? Society's ever-increasing dependence on ubiquitous connectivity, the internet of everything, and dynamic, big-data-driven decision-making has created an unquenchable thirst, and hence fuel, for ever-increasing optimization. Speed, efficiency, and increasingly resiliency are main ingredients for most things in our global society: national security decision-making, dynamic defense operations, global health incident response, natural disaster response, new products' time to market, competing for first-mover advantage, and global supply chain management. The list goes on. The need for speed drives most other variables, and our network-enabled, data-driven lives seek to squeeze out the inefficiencies that impact it. This quest has propelled the accelerated development and application of highly advanced algorithms, enabled by marvels in high-performance computing, at a pace that has now prompted us to consider whether we're ready for the power we have unleashed.

Of course, the fall of 2022 was not the dawn of artificial intelligence or its application in cybersecurity solutions. The cybersecurity community has been steadily increasing its use of AI-enabled analytics over the past ten years or so. Most of you will recall, or will have lived through, the prior waves of hype; I recall the advent of the incessantly applied AI/ML term that started pervading the tech-sector buzz circuit about seven years ago. Those four letters seemed to be sprinkled on everything, almost like salt and pepper. That said, the advance of AI- and ML-enabled analytics and decision logic did start to steadily enhance the efficacy of commercial and government cybersecurity solutions, so there was substance to all that hype.
All that said, we were not alone in applying these technologies to cybersecurity challenges. The scope, pace, and ever-increasing sophistication of foreign malicious cyber actions against our national interests forced NSA and our mission partners to intensify our efforts to augment our analysts with technology to better identify, mitigate, eradicate, attribute, and defend against these threats. In parallel, U.S. and allied industry was developing ever more sophisticated capabilities, and we forged public-private partnerships for enhanced defense against the dark arts. The good news is that we're better than we've ever been. The challenge is that a new race is now underway.

The development and global release of the various large language models supporting generative AI has changed the world forever, and while untold applications stand to enhance the far reaches of our global society, those reaches unfortunately include the realms of the nefarious actors who will, as they always have, use the power of new technology to intensify their badness. Hence we can never turn back: effective cybersecurity will require the power of integrated AI, while bad actors will continuously use it to enhance victim discovery, penetration, persistence, obfuscation maneuvers, and malware implant feature-set enrichment. And while generative AI will support bad actors and defenders alike, the discipline of non-repudiation will take on tremendous significance, as LLMs will be used to generate highly authentic fake content for malicious cyber operators masquerading as trusted members of a network and swapping in data or information to displace original content.

So, as we look forward into this phase of globally accessible and ever-smarter LLMs, which are complementing myriad other variants of maturing AI algorithms, we must understand and prepare for some basic realities and challenges. The sophistication of the threat will intensify. The speed of threat actors will dramatically increase. Human defenders are already overwhelmed and require machine teammates just to stay at their current level of efficacy. Malicious operators will use their machine teammates to find the weakest links in victim networks more effectively. Entities that have struggled to defend themselves will be increasingly vulnerable; this set includes private operators and municipalities operating critical infrastructure services, as well as many organizations that are ready prey for ransomware attacks today. Today's panel and the discussion of Aspen Digital's new papers will give us all points to ponder and considerations for bolstering our defenses through the application of these new technologies, coupled with thoughtful contemplation of how our global community should harness the power of this technology for good. Thank you for your time, and sorry we couldn't be in person today. Thanks.

Excellent. Thank you so much, George. I really appreciate the plug for some of our recent work and, more broadly, the excellent overview you provided as context for this next panel. Next up, I'm really excited to welcome representatives from industry and civil society for a forward-looking discussion about where we go from here. As we saw, George and Ben covered the evolution of AI over the past few years as well as the recent White House EO. Ben highlighted how this tech revolution is different from others, in that the government role was not as prominent; he also highlighted the need for some public-private discussions, which we'll touch on momentarily. George shared how these technologies have been used by authoritarian regimes to subvert democratic ideals, from ransomware to election meddling, extortion to espionage, and I'm really excited to dig into what that future looks like.

So I'm thrilled to be joined today by Bobbie Stempfley of Dell, Amanda Walker of Google, and Govind Shivkumar of Omidyar Network. Each brings a really valuable perspective to all of this, so I'll invite each of you to introduce yourselves and specifically talk about your role and how it has evolved over the past year. Bobbie, I'll turn it over to you first.

Yeah, thank you, and I appreciate the panel and the conversation we intend to have; I think it's a really important topic. I am a business unit security officer at Dell Technologies in the product organization, so think of me as the security strategic partner to the part of the company that builds the things that are innovating and transforming, across a wide range of security disciplines, from product security to cybersecurity to insider risk. It's interesting to me, when we talk about how the last year has evolved: it has constantly felt like the speed button on everything we're doing continues to ratchet up. Every time I feel like we're running really fast in the security community, something new happens and that speed button gets ratcheted up again. For me, that has meant we have to regularly stop, take stock, and look at where the most logical points of intervention are in order to really engage, because this has long been a problem of scale and capacity. So I'm really looking forward to the conversation with the colleagues today, and I'm glad we were able to do this even in this remote format.

Excellent, thanks, Bobbie. You mentioned those points of intervention; there are certainly industry perspectives on this, and we saw some of the government perspectives. I would love to turn it over to Govind, who brings a civil society perspective, to introduce himself and also share how his job has evolved over the past year.

Thank you, Katie, thank you everyone, and thanks, Aspen, for hosting us. I'm Govind Shivkumar, a director in the responsible technology team at Omidyar Network, a philanthropic investment firm that invests in social change, and our life has significantly changed over the last year. We've announced a new point of view to ensure that generative AI is in the service of society, and we've announced a new $30 million initial investment fund that will bridge the gap between industry, civil society, and policymakers, broadening the generative AI infrastructure. Overall, to echo Bobbie, I don't think we've ever seen the pace of movement be so significant and so fast as over the past year. We've seen technological changes happen every hour, every day, and we have seen policy implications and reflections from all corners. So we hope 2024 continues not just the pace but also brings more depth to these conversations, and primarily implementation of all the ideas.

Absolutely, thanks, Govind. And Amanda, I'll turn it over to you; I'm assuming you've experienced this pace over the past year as well.

Oh yes, it's been quite exciting. Hi, I'm Amanda Walker. I lead the applied research team in privacy, safety, and security at Google, and the easiest way to describe what we do is that our job is to find new tools to add to the toolbox for people working in this area.
We've been early adopters and innovators in AI for a while at Google, and keeping up with some of the acceleration over the last couple of years, with LLMs and so on, has kept us on our toes. So thank you for inviting us to be part of the conversation here.

We're thrilled to have you; thanks, Amanda. All right, I know we only have 40 minutes and a ton of ground to cover, so I want to invite all of our panelists to share your thoughts, agree with other panelists, disagree with them, or elaborate on anything the other panelists have shared; it should be a great discussion. I'll start with a few general questions and then we'll move into more targeted questions as we go. Bobbie, I'll start this one with you and then open it to the others. We just released a paper discussing the two futures, if you will, of AI: both the good place and the bad place. I know you worked on that paper a bit, so I'm curious, after being a part of that: who do you think AI will advantage more, defenders or attackers?

Yeah, that's the perennial question. With any technological innovation, we ask ourselves this question, and I think today I'm certainly seeing the advantage go to the defenders more than the attackers. There are lots of bad things out there, don't get me wrong; I am in no way saying that there aren't advantages to the bad place, or the dark side, or whatever framing we want. But I think the real question is how we ensure that that sustains: that we can take advantage of the speed and the innovation for defenders, changing not just the dynamics of the engagement but the environment itself that's being defended. We really have the potential with this innovation to change all of the different phases in the environment, which makes it so different from many of the other innovations we've had in automation.

Absolutely. I really appreciate a degree of optimism to get us started. Amanda, do you share it? And if it doesn't hold, don't worry.

I think I'm with Bobbie on this one; we're seeing big advantages to defenders. It is an arms race, and there are always innovations on both sides of the coin, but there has always been an asymmetry between attackers and defenders, where defenders have to see everything. It's called the defender's dilemma, and we're seeing some real traction on that by applying AI to the scale of what we need to keep track of, what signals we need to find, and what we need to do with them. So I would say that, so far, we're in the camp of AI helping defenders more than attackers, and we certainly hope that continues.

Absolutely. Govind, over to you: what are your thoughts?

I actually have a different perspective. I think currently, at least in the short term, technology such as this is probably proving William Gibson's often-quoted phrase that the street finds its own uses for tech. I also think a lot of these attacks are not being reported. As defenders, as people who invest in civil society and industry, we have to follow the rule of law and uphold reputations, so we are generally optimistic but also tend to follow protocol and rules, whereas attackers are not bound by any of these constraints. And attacks often go unreported if they are small enough: small phishing campaigns, small ransomware incidents that people probably just don't report. But, to preempt your second question: in the short run, any new tech such as this, which is sufficiently powerful and actually enables lower-skilled attackers to punch above their weight, will benefit them. In the medium term, we often find both specific solutions and systemic solutions, through policy and through specific technological interventions, and it's a stalemate. In the long run, defense always wins, because at the end of the day the defense has more support, better vision, and more cohesiveness. But the playground moves, so in two years we'll probably be talking about a new form of tech, and the cycle will begin again. That is my overall reading of the situation.

Thanks, Govind. And Bobbie, I'm curious if your optimism holds across a longer time scale.

I think for me the longer time scale is where things get, well, it's impossible to have a clear view over a longer time scale given the world we live in. But the piece I see is the transformation and innovations outside of the security domain that change the way adversaries act, which we need to understand inside the security domain so that we can gain that view. I find in many instances that's where we end up getting caught off guard. This technology, and innovations in general, are changing the nature of everything we do, the nature of how we work and live and interact, and so understanding how all of those are going to co-evolve, and what that means for us as users, as IT decision makers, and as security defenders, is a really unclear picture going forward. It means we've got to have agility and resilience, as individuals and as communities, in order to adapt.

Absolutely. Amanda, I'd love to go over to you and see how the time scale maybe shifts some of this, in particular pulling on that thread around incentives for attackers and how they might change over time.

It is certainly changing over time; this ebbs and flows, and we're seeing it ebb and flow a lot faster. One sea change we're seeing, which some of us running large platforms saw early but which is now penetrating the rest of the industry, is that it's no longer a series of incidents; it's a rolling wave. People are trying new things all the time, some work and some don't, and you really need to stay on top of it. So the pace has certainly changed, even if the essential nature of things hasn't. That's a good segue into your second question, though; I think there are some really interesting patterns here.

Yeah, absolutely. And in addition to the deluge of new attacks we've seen, we've also seen a ton of new opinions and a ton of new headlines around this. Some are pretty extreme, going right to doomsday, and some are more moderate. So, curious for everyone: what is something that folks are getting wrong, or not quite right, around cybersecurity and AI? Govind, I'm happy to pick on you first.

Sure. One of the things where, I wouldn't say we're getting it wrong, but where we need to change our perception and perspective, is what I call anthropomorphism: projecting human values onto machines and non-humans, getting inspired by all the movies we have watched, getting influenced by them, and letting imagination take over and fill in the blanks, rather than saying, here are the risks and here is what we can do about them, or here are the harms. The other thing we don't really quantify is that risks are often very theoretical, whereas harms are real; they impact people. But what we're also getting wrong is that we need to have an affirmative vision. Any form of technology, in the long run, helps create a more positive ecosystem in which many different opinions can thrive, and we are focusing on one opinion, focusing only on what adversaries can do with this technology (which I know is how I started), and not on how we can also use this technology to create defensive mechanisms. If misinformation is a problem, we can also focus on watermarking as a solution. If adversarial attacks do happen, we can now automate analysis of large quantities of data like never before; work that once required human beings can now be done by machines. So that's what we're getting wrong, in a sense: how we can use this tech to really push back on some of the existing narratives.
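As a concrete illustration of the watermarking idea Govind raises: one family of published LLM watermarking schemes biases generation toward a pseudorandom "green list" of tokens, so a detector can later test whether a piece of text carries that statistical signature. The sketch below is a toy version of the detection test only; the hashing scheme, key, and threshold are illustrative assumptions, not any production watermark.

```python
import hashlib
import math


def is_green(prev_token: str, token: str, key: str, gamma: float = 0.5) -> bool:
    # Toy pseudorandom partition: hash (key, previous token, token) into [0, 1)
    # and call the token "green" if it lands in the first gamma fraction.
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < gamma


def watermark_z_score(tokens: list[str], key: str, gamma: float = 0.5) -> float:
    # Unwatermarked text should hit the green list about gamma of the time;
    # text from a generator that favored green tokens will score much higher.
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(is_green(prev, tok, key) for prev, tok in zip(tokens, tokens[1:]))
    return (hits - gamma * n) / math.sqrt(n * gamma * (1 - gamma))


# Example: a z-score well above ~4 suggests the text carries this watermark.
# print(watermark_z_score("the quick brown fox jumps over the lazy dog".split(), key="shared-secret"))
```

In practice the generator and detector share the key, and detection is a statistical judgment rather than a hard guarantee; paraphrasing or heavy editing can wash the signal out.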
Yeah, absolutely. I think it's hard to maintain that balance in this conversation, definitely. Bobbie, how about you: any areas you commonly see not quite right, or getting wrong, in headlines or elsewhere?

Yeah, it's really easy to get absorbed in the hype, because the innovations are happening so quickly, and I think two things get lost amongst all of this. The first is that it has always been, and will continue to be, about the data. We very quickly and easily get focused on container security or the technical components, which are incredibly important, but it has always been and will always be about the data: where that data is and what it is intended to do. Really understanding that, and staying focused on it, is really important for organizations, which means it's still very important to focus on the fundamentals in your environments as well. Again, it's really easy to get caught up in the hype and move down a journey that leads you away from staying grounded in some of the more fundamental issues that need to continue to be addressed. With that said, a comment I made earlier is still true: there's an awful lot of innovation that's going to happen outside of the IT environment that will create incentives for adversaries, and then we're going to see it change the attack patterns. So we really have to sustain an understanding of what that's going to look like as well, so that we have a good sense of how our threats are going to change.

Absolutely. And Amanda, over to you.

Sure. I think the thing people get wrong most is going to extremes: it's either the end of civilization as we know it, or it will completely transform civilization into something unrecognizable. I think it's somewhere in between, like the internet revolution, like what we used to call the computer revolution, if people remember that. It will change incentives, it will change the economics; some capabilities that used to be impractical, or possible only with huge amounts of resources, you can now do on a phone or a gaming PC. It will amplify that on both the attack and defense sides. So I think we're all in fairly strong agreement here, but we do have to avoid going down one rail or the other. It's new technology; it will change things in ways we don't expect and can't anticipate. Humans don't change as fast as technology, so how humans use it is going to be somewhat familiar; people are already using it for fraud and other things they've used previous technology for. It's not as much a strange new world as I think people are worried about, but we do need to stay on our toes.

Yes, absolutely. So a theme I heard throughout is finding that balance and not going to either pole immediately, which brings me to my next question, which I think will be a little tough to answer on any long-term time scale. We heard earlier from our government colleagues about the policymaking process and the need for public-private partnership while regulation catches up, and we've seen a ton of voluntary commitments and self-governance efforts across the private sector. From each of your perspectives, what policy actions would be helpful in getting us on track towards a future where defenders have that advantage? Amanda, I'm happy to go right back to you; I know Google was a signatory on the voluntary commitments in particular.

Yeah. I think some of the most valuable parts of that are agreeing on sets of principles, agreeing on a taxonomy of ways to use AI, and, as we discover novel attacks or defenses, agreeing on how we share that information, how we compare notes, and how to do adversarial testing responsibly. We've also been championing the idea of an AI red team, much like we've had security red teams for a long time. We've got one internally that has been poking and prodding as we've been releasing products that use AI; we encourage this, and it's one of the best things we think you can do. And I think CISA has done a great job of leading the way here, of saying: yes, all the same cyber things you did for software, do for AI as well; they're still important. So setting that example and promoting some of those guidelines has been really helpful already, and I'd love to see government continue with that.

Thanks. And yeah, having that common dictionary of terms and standards goes quite a long way, even just to have the same conversation across the table. Bobbie, how about you: any policy actions you would like to see to get us on track?

Yeah, I think that idea of frameworks and principles, really getting agreement and conversation around those, becomes an important first step. We use words to mean fundamentally different things in different areas here, and that, I think, impedes our ability to get the outcomes we're looking for, so I agree with Amanda on focusing on taxonomy and principles. I think the open, shared dialogue continues to be incredibly important. There are a multitude of players, and given where we are today, where we don't know where the path will lead us, making sure we have the players at the table for the conversation becomes an important mechanism. And finally, keeping the end in mind becomes more important now than ever, because the mechanics are going to evolve rather rapidly, and we want them to. That means we've got to really focus on what kind of outcomes we're trying to drive towards and provide an awful lot of clarity around that, and so the dialogue between civil society, government, and industry has to be framed in those terms.

Excellent, and that's a good segue over to Govind, who I know worked on our global paper a bit, looking at how we keep the end in mind across the different regulatory frameworks we're seeing. So, Govind, anything to add?
I would say that between the EO, the Aspen papers, and a whole host of existing AI governance mechanisms and reports, there's a lot of material out there; I think we are probably at peak AI governance from a reports-and-documentation standpoint. But much of the infrastructure is actually private, so there's only so much policymakers can do. It's imperative to work closely with industry and civil society, something like what JCDC did with software by bringing in industry partners, because whether we like it or not, industry and civil society are at the front lines of this defense mechanism. Second, the EO already lays down priorities; the question is coordination and allocating resources. It's a question of money, of formal law at some point, and of talent, as Ben was alluding to earlier. So there's a lot of documentation out there; the question is how we implement, enforce, execute, and coordinate. The devil is in the details, and I think the policymakers have probably done a fantastic job of responding to the situation faster than ever before.

Absolutely. Well, that provides a good segue for me to turn back to our industry colleagues. Bobbie, across all the technology revolutions we've seen in the past thirty-plus years, how is the open accessibility of AI tools changing things for the hardware and software industry?

I think the question of how it has changed things is one thing, and how it will is still a partially unanswered question. We're certainly seeing continued innovation at all levels of hardware and software: great innovations at the chip level, all the way up through transformations in the PC space, all the way into data center operations. We're seeing innovations in how software is actually built. I keep coming back to: it's all about the data in the end. The data is the software, the software is the data. We're creating a world where we are no longer in a deterministic space, and it is no longer easy to draw a fine line between what is hardware, what is firmware, what is software, and what is the data we're processing on. There's real opportunity in that. What we're seeing, at least, is that the innovations are happening at all levels; it's not just what's happening in the core, in large centralized locations. Inferencing is moving to the edge, and the power in your PC, your phone, and your edge devices is really phenomenal in this landscape. The other thing this is forcing, or continuing as a journey, for companies is to really understand how we do some of the basics in the hardware space as well: how do we think about power management and sustainability? That's a tangential comment to many of the security-related items, but at some level you think about availability as a core security principle, and what that is going to look like in a world where we have to have this kind of power in a variety of places. So there's innovation at all levels in hardware and software, in part because of this transformation.

Yeah, it's definitely a very integrated ecosystem. And Amanda, I think you touched on some red-teaming efforts in particular; I'd love to broaden that question a bit and hear a little about what Google is doing to try to harness AI to improve cybersecurity.

Sure. One thing I'll note: this isn't new. We've been using AI to protect Google systems and users since about 2011, and we started by using it to detect hackers on internal networks. Our red team is regularly poking and prodding and coming up with new techniques that we can test against our own systems, so it has become an essential part of that tool set. We do think that AI really tilts the scales to give a decisive advantage to defenders, based on that kind of experience. Coping with complexity is the big one. One of the problems with security breaches in general is that the digital domain is just too complex for humans; it's too hard to see the whole picture, especially if you're one person and your pager just went off. AI can help us cope with that and rapidly reason about what just happened and what we need to know, even when the data sets are very large and the systems are very complex. Most recently we have been using LLMs for this. We are using them to help bridge the talent gap, so that novices, early-career folks, and experts are using the same tools and learning from them, helping spread knowledge, doing things like taking logs and attack graphs and making them human-understandable if you haven't had a decade of experience looking at those kinds of artifacts. That's a real force multiplier we're seeing. It also helps reduce toil. Because of the huge attack surface, doing things like an after-action report on what just happened for your management and execs, or a disclosure to a regulator, is work where having AI help and summarize things into a first draft matters. We find this is especially important for people who are not necessarily fluent in the language the company uses, because we have people all over the world, so having AI assistance on a bunch of this helps a lot. And then there's helping stop it, helping mitigate it: we're in very early days on this with things like our Security AI Workbench, but we think there's great potential for adapting to attacks as they happen while bringing in humans. I think purely automated defenses are still a myth, but having smarter defenses and better signals than "my pager just went off because some monitoring console saw something go out of range" is a huge help, and we see great potential there.
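One of the uses Amanda describes, turning raw logs into a first-draft, analyst-readable summary, is easy to picture with a short sketch. The helper below uses the OpenAI Python SDK purely for illustration (it is not Google's internal tooling); the model name and prompt wording are assumptions, and any output would still need review by a human analyst.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def summarize_incident(log_lines: list[str]) -> str:
    """Draft a plain-language incident summary from raw log lines."""
    prompt = (
        "You are assisting a security analyst. Summarize these logs in plain language: "
        "what appears to have happened, which hosts or accounts are involved, "
        "and what the analyst should check next.\n\n" + "\n".join(log_lines)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration only
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example: feed in the last few hundred lines of an auth log.
# print(summarize_incident(open("/var/log/auth.log").read().splitlines()[-200:]))
```

The value here is the first draft, not a verdict: the summary gives a less experienced responder a starting point that a senior analyst can then confirm or correct.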
isdoing um to try to harness AI to improvecyber security sure um one thing I’llnoce this this isn’t new um we’ve beenusing AI to protect Google systems andusers since about 2011 and we started byusing it to detect uh hackers oninternal networks and our red team isregularly poking and prodding and comingup with new techniques that we can testout against our own systems so it’sbecome an essential part of that toolset um we do think that AI really uhtilts the scales to give a decisiveadvant advantage to Defenders based onthat kind of experience um coping withcomplexity is the big one um one of theproblems with with security breaches ingeneral is that the digital main is justtoo too complex for humans it’s it’s toohard to see the whole picture especiallyif you’re one person and your PID arejust went off so you know uh you know AIcan help us cope that and rapidly reasonabout okay what just happened what do Ineed to know uh even when the data setsare very large and the systems are verycomplex um most recently we have beenusing llms for this um we are uh uhusing it to help bridge the talent Gapyou know so having novices early careerfolks uh and experts using the sametools learning from from that uh helpingspread the knowledge doing things liketaking logs and attack graphs and makingthem human understandableif you haven’t had a decade ofexperience looking at these kinds ofartifacts and so that’s a real Forcemultiplier that we are seeing um it alsohelps reduce toil um there’s because ofthe huge attack surface doing thingslike okay doing a doing an after actionreport on what just happened for yourmanagement in your execs or disclosureto a regulator having having AI helpwith that and summarize things to afirst draft we find this is especiallyimportant for people who are notnecessarily fluent in in the languagethat the company uses because we havepeople all over the world and so havinghaving AI assistance on a bunch of thishelps a lot um and then helping stop ityou know helping mitigate it um we’revery early days on this with things likeour security AI workbench but we thinkthere’s great potential for adapting toattacks as they happen bringing inhumans I don’t I think the purelyautomated defenses are are still a mythbut uh having smarter defenses having[Music]better better signals than oh my pagesjust just went off because somemonitoring console saw something go outof range is is a huge help and we seegreat potential there yeah definitely Ithink there are still many efficienciesto be discovered with this newtechnology overlaying them with existingtools certainly um Goen would love to goover to you and switch gears a littlebit um they’re much of the world despiteI think all of us being in the AI spaceis not focused AI in on AI day-to-day umand so Omid network does a ton of greatwork in supporting communities you’rethe director of responsible technologywithin that um are there things thatCivil Society groups should be doing nowto fortify themselves from these AIrelated cyber security risks even whenthey don’t have that expertise maybe inhouse or thatFocus yes I think sort of harking backto the conversation saying most of theinfrastructure is private right sonon-state actors are at the front linesof defense and I think one thing thingthat uh Civil Society actors should andcan do is moveBeyond chat Bots and there are manyother threats and then there are manyother tools available and there are manyother ways that AI or particularlygenerative AI can be helpful right uhparticularly to fortify themselves 
Yeah, definitely. I think there are still many efficiencies to be discovered with this new technology by overlaying it on existing tools. Govind, I'd love to go over to you and switch gears a little bit. Much of the world, despite all of us here being in the AI space, is not focused on AI day to day. Omidyar Network does a ton of great work supporting communities, and you're the director of responsible technology within that. Are there things that civil society groups should be doing now to fortify themselves against these AI-related cybersecurity risks, even when they don't have that expertise or that focus in house?

Yes. Harking back to the earlier conversation, most of the infrastructure is private, so non-state actors are at the front lines of defense. One thing civil society actors should and can do is move beyond chatbots: there are many other threats, there are many other tools available, and there are many other ways that AI, and particularly generative AI, can be helpful, particularly to fortify themselves, such as taking advantage of open source cyber tools. I know there are certain risks that come with open source cyber tools, but one thing is to not talk in silos. Traditionally, industry tends to talk among themselves and to create products and services which are far ahead, while civil society tends to talk about specific harms and specific issues. There should be an open dialogue with industry about how we can fortify ourselves: what are the specific tools, what are the micro-courses we can take, where are these tools available, and how can we use them? There are specific risks that civil society organizations will face, and you will need specific solutions. So one thing is to participate in conversations where technologists and industry leaders participate along with civil society; I think Aspen has played an excellent role historically, and we should find more such platforms. It's also incumbent upon Amanda and Bobbie to include civil society actors in the conversations they are having, informing and educating them. Often we leave this role to philanthropy and government, and there are only so many resources available there, while the pace at which industry can move is unprecedented, as we have witnessed in the last few months. So rather than suggesting specific fortifications, I would say: open dialogues, talk to industry, find what specifically works, and look at resources that are available, often free, as open source.

Absolutely. Well, that was a good invitation for me to go right back to Amanda and Bobbie. I'd love to hear from you both about any focus your organizations have on civil society, both your actual customers and non-customers, and whether there are any specific low- or no-cost resources or tools you might refer them to.

Yeah. One of the things that is very much in the Dell ethos is the construct of bringing others along, so Govind's call is really important. We have a range of volunteer activities and engagement functions that help, including a team of individuals that we train who then go into schools and help under-resourced schools bring their security functions up. So really this is a question of how we ensure that we are reinforcing the dialogue with civil society partners about what the policy landscape is, ensuring that they're at the table, choosing venues such that they have the players we need at the table, and then providing support for the individual entities as they face this same transformation.

Right. And Amanda, over to you.

I think the biggest thing we can do here is have the conversation out in the open, not a lot of back-room discussion between vendors, between private sector and public sector. We all have different views that combine. We would like to see more things like the Frontier Model Forum, where we have not just us but everybody from industry, everyone from government, everyone from civil society looking at these developments and asking what AI safety means. Ultimately, government regulation is often motivated by keeping people safe from threats, so what does safety mean in the face of this? How do people's expectations adapt as they get new tools?
I've seen public attitudes about things like ChatGPT change over a year: at first it's magical, and then after using it for a year, okay, it's not as magical as it looked. So tracking that, having that discussion, and keeping up with it matters. Civil society is grappling with these things just like industry and government: what are the important things that are emerging, and what are the things receding into the background because they looked like they would be important but aren't now? Keeping that conversation going, identifying best practices where we can, and identifying the concerns of each sector of society is really important.

Yeah, I think it's really powerful to make sure that everyone has a chance to have a seat at the table and a part in the conversation. Great. Well, I realize we're starting to get low on time. Amanda, I have one more question for you, and then we'll turn to a lightning round for all of our panelists. Amanda, I know you have a background in privacy, so I'm very curious to hear how you see AI impacting the future of privacy, at least as it stands today.

It's a subject near and dear to my heart. Like anything that processes or can process personal information, AI does raise some privacy questions. So some of the same principles we've developed for data in general, like transparency, control, and data minimization, aren't new, but they should be feeding into AI governance as well. How to do that is not always clear, though, so I think we need to work more on clarity about how the privacy concepts developed over the last however many years should and do apply to AI. I also think there's great potential to advance privacy goals with AI, and I don't think we talk about that enough. We're using it now internally to get a much better understanding of privacy-related feedback that we get from users, or of compliance issues, things like that. The Google Play Store has more data than I can describe about users' comments on products; taking that and distilling out the top three privacy concerns that the user community across the world has expressed, and turning it into actionable feedback, has turned out to be a really powerful tool, and it would not be practical without AI. I could monologue on this for a while, but, like the security side, I think AI amplifies both attack and defense. AI gives you some traction against traditional privacy guarantees and techniques, but it also brings new ones, and we should be leaning into those.

Absolutely, much like that attacker-and-defender tension we discussed at the outset. All right, great: a lightning round before we transition to our final session. Govind, I'll start with you. What can organizations do right now, today, to make sure they are not introducing major new risks as they deploy or interact with AI tools?

Until we have formal regulation, I think we should look at indirect means of governance. If you're using open source models, make sure they're responsible, which means better, cleaner, licensed data sets; use techniques that help uphold privacy, like Amanda mentioned; there are technological interventions, like perhaps using synthetic databases as a form of preemptive governance; and use audit mechanisms to ensure that when you procure and deploy these technologies, you are well informed.
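Govind's mention of synthetic databases as a form of preemptive governance can start as simply as keeping real personal data out of test and prompt pipelines altogether. A minimal sketch, assuming the open source Faker library is acceptable in your environment and using illustrative field names; none of this comes from the panel or the papers.

```python
from faker import Faker  # third-party: pip install Faker

fake = Faker()

def synthetic_customer_record() -> dict:
    """Return a realistic-looking but entirely fabricated customer row,
    so AI tools can be exercised without touching real personal data."""
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "last_login": fake.iso8601(),
    }

if __name__ == "__main__":
    # A small synthetic table for stress-testing a pipeline or prompt flow.
    for row in (synthetic_customer_record() for _ in range(5)):
        print(row)
```

Synthetic rows like these let a team exercise an AI workflow end to end before any real customer record is ever exposed to it.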
don’tget four more I think the technology ishere to stay but you can be careful haveproper governance mechanisms and stillwin second I think we should think aboutas an as as a child does be curious notafraid but at the same time think aboutdeploying these Technologies insandboxes small environments to test andstress test the results before you startdeploying and adopting and using apiswithout fully understanding what uh thesecond and third order effects are uh infor your tech for your firm for yourindividual use cases a simple asimple uh hygiene could be just don’tcopy paste stuff uh into chatbots whichcould then release sensitiveinformation uh into the world and whichyou can’t control right so some hygienicuh stuff should go a long way uhthat that’s what I would be back tobasics definitely uh Bobby over toyou yeah Back to Basics is uh I think isa really important uh framing for itit’s I think it’s interesting uh andimportant for us to separate what’s newabout this from what’s just the same umperhaps either made faster or um or sortof more broadly and not lose sight ofthe security guidelines and principlesand actions that you would take whetherthis was the technology or not so sogoin’s comment about not just copyingthings into chat Bots don’t just copystuff into Google search uh windows orother sort of other mechanism in thatspace um I think for me the other one isreallyrecognize and and focus on some of thoseother important things right what arethe what is the threat model that you’reoperating under how does this trans howdoes this change that threat modelthreat modeling is a A can be a complexuh activity but it can also be a reallysimple activity and uh and so you knowstart with the simple and and evolvethrough and I think that’s really animportant sort of an important Place uhthere and don’t get lost in the securityinnovation false choice you can do bothum it involves being thoughtful uh beingclear um and being skeptical um andthose are I think important principlesfor all users um in the world today notjust for the SecurityProfessionals absolutely I appreciatethe balance of being clear and skepticalthere for sure all right Amanda um I’llhand it over to you for the last wordbefore our next panel sure I think thatbalancing balancing speed and Care isgoing to be very important you knownobody wants to be left behind everyonewants to uh use use the latest coolstuff and and there are big advantagesto applying AI to problems that you havebeen able to get traction on beforebut we think it’s really important tokeep in mind that this does create somenew attack surfaces and concerns thatyour your current cyber security regimemay not uh may not match um things likeprompt attacks prompt injection datapoisoning it’s it’s really important forif you’re going to deploy AI to knowwhat yourdeploying what what was used to train itwhat uh what data using to performinference on and if you’re delegating adecision especially if you’re delegatinga sis to the output of a model know howit can gowrong there’s a there there’s a lure ofautomation but you know we’ve seen somethings in the Press about oh I had I hadan llm do a bunch of legal research forme it’s like well it produced somethingthat sure looks like it but those casesdon’t actually exist you know this kindof thing is something that you know ifyou’re deploying Ai and you’re relyingon the results you’re responsresponsible to make sure it’s it’sgiving you what you think it is so themixture of speed and Care is alwaysdifficult but it remains really 
Absolutely; that oversight component is definitely worth highlighting. Well, thank you all very much for joining us, Amanda, Govind, and Bobbie; it's been great to be in conversation with you today. In these last few minutes of our time together, I'm thrilled to hand it over to my colleague Yameen, who will discuss our recent Aspen cyber products. Yameen, over to you.

Cool, thanks, Katie. Happy to be here, and very excited to introduce my two next guests. For those of you who don't know me, I'm Yameen Huq, the Director of US Cybersecurity here at Aspen Digital. To preface this discussion: we're very excited to talk about two papers that the Aspen cybersecurity program has recently launched. What we really want to do is talk through what these papers are all about, but more importantly what the big picture is and what you can take away for your own work, whether in the private sector, the public sector, or civil society, in terms of using AI tools in the context of cybersecurity.

We have two papers, and I'll give you a quick one-liner about each. One is a joint effort by the US cybersecurity group and the Global cybersecurity group called "Envisioning Cyber Futures with AI." As organizations entrust more data to systems enabled by AI, we imagine two particular scenarios: one where AI-powered tools advantage defenders, and one where they do the same for attackers. The paper uses those possible worlds to determine what kinds of policy carrots, sticks, and so on we can use to get to the better place. The other paper we'll be discussing was created by the Aspen Global cybersecurity group and is called "Generative AI Regulation and Cybersecurity." Generative AI tools can make our lives worse or they can make them better, and what we did was analyze existing regulatory efforts across the world and their pros and cons at the intersection of generative AI and cyber. The ultimate product is guidance for governments focused on liabilities, safeguards, and standards.

With that said, I'd love to introduce the two individuals who were critical in the formation of these papers. We have with us today Jonathan Welburn, a senior researcher at the RAND Corporation and a faculty member at the Pardee RAND Graduate School, whose work spans operations research, computational economics, decision and risk analysis, systemic risk, and market failures. We also have Jane Horvath, a partner at Gibson, Dunn & Crutcher and co-chair of the firm's Privacy, Cybersecurity and Data Innovation practice; prior to that she was the Chief Privacy Officer at Apple, where she oversaw Apple's compliance with global privacy laws as well as general issues relating to cybersecurity. Jane was actively involved in the Global cybersecurity group for the paper on generative AI regulation, and Jonathan was involved in the US group for "Envisioning Cyber Futures with AI." Thank you both for joining us.

Let's start up front. The very first thing I'd love to do is give the audience a concise summary of each of these papers: who is the audience for the paper, what is the bottom line, and what key recommendations and actions should anyone reading it take away? I'd love to start with you, Jane: what is the key takeaway of the global paper, "Generative AI Regulation and Cybersecurity"?
Sure. Our paper focused on generative AI, its regulation, and its impact on cybersecurity. Our group is made up of global experts with decades of experience in cybersecurity, and we wanted to offer some observations and advice to governments as they grapple with the implications of generative AI for cybersecurity. In a world where security breaches can have far-reaching consequences, the synergy between AI and cybersecurity is not merely an option; it is an imperative. Like any transformative technology, generative AI creates new attack vectors even as it improves defenses. Possible risks include creating deepfakes for fraud and scams, automated phishing and social engineering, impersonating identities online, generating malicious code and content, and evading AI-based detection systems. However, generative AI can also counter these threats by detecting generated content and malicious use. As governments dash to install legal and regulatory safeguards, organizations should adopt a multifaceted approach that includes robust testing, ongoing monitoring, threat modeling, and ethical considerations. Additionally, combining AI with human expertise and maintaining a proactive stance in cybersecurity practices remain crucial to safeguarding digital assets and systems. Finally, these technologies only benefit cyber defenders if they're adopted; it is critical that policymakers consider procurement approaches that enable the adoption of innovative security technologies. Past efforts have shown that consistent approaches across like-minded nations provide the foundation for successful governance, driving better security.

Awesome, thank you so much for that. And Jonathan, would you like to give us an overview of "Envisioning Cyber Futures with AI"?

Yeah, absolutely; thanks, Yameen. This paper jumps right into the conversation that's been fueled by the release of large language models, which brought a wave of attention to AI that felt like a sea change for cyber. We're talking about all of the different uses of AI here rather than a specific type, and acknowledging, as was discussed in the last panel, that cyber has used AI for quite a while. At a high level, the paper speaks to organizations that are deploying AI tools, to the policy community, and to the community of cyber defenders; as a researcher, I'd say there are even some pieces that speak to the research community as well. The paper definitely speaks to the challenges of AI and acknowledges that these tools can and will be weaponized to increase both the amplitude and the frequency of malicious activity in cyberspace, in areas like disinformation, advanced targeting, and phishing attacks; we know there are real, tangible risks that the paper talks to. But it also speaks to a more positive side, which I think is a unique and cool feature of this paper: it highlights ways in which defenders can actually regain the advantage over attackers by leveraging AI tools. Here the paper lays out several recommendations, too many for me to remember and spout out for you all, and Yameen told me to keep it to ninety seconds, but my favorite is the first one: avoid the hype. I love that one because it's as much about security as it is about having a coherent organizational strategy.
The recommendations also include making sure that cyber and AI practitioners are collaborating rather than working in silos; making sure that cyber defenders are prioritizing logs, in order to aid in detecting the more sophisticated AI-enabled attacks; and transparency around the use of AI.

Awesome, thank you so much. Now that we've laid out this foundation, I want to bring the discussion to first principles: what are the key, top-level recommendations we can use to think about the relationship between cyber and AI? In one of the papers we refer to these as, quote, the "bumper sticker" recommendations, and they could be things like being transparent in a thoughtful way, or incorporating rules around the use of AI when developing contracts. In the generative AI paper we also discussed what works best in cybersecurity and how we can apply that to AI. So, Jane, my first question to you: what do these first principles in AI mean to you, and what's already being done in the cybersecurity space that we can apply as best practice in the context of AI?

Sure. Well, and this is something we recommend in the paper: in order to generate effective government action, you want to start with the end user in mind. The prevailing narrative surrounding AI often centers on industry and governmental concerns, such as assessing its macroeconomic impact on the labor market or the necessity of regulations, licensing, and risk frameworks. These discussions are undoubtedly important, but they can overlook the immediate needs of, and risks to, individuals. The reality is that most individuals lack the means to adequately protect themselves, and individual education, both for users of generative AI and for the general public, operates with a lag and struggles to keep pace with the breakneck speed of technological advancement. So, critically, keeping the end user in mind is probably the most important thing as governments look at this.

Another area they need to look at is assessing criminal and civil liability. The ease and efficiency that make generative AI popular with the general public apply equally to those who would use it for nefarious purposes. Current laws were written without consideration of generative AI, and in many cases before it was even imagined; at a minimum, governments should review current statutes to see whether they need revision to account for these developments and the legal disputes that could come with them.

Third, consider technology safeguards and feasibility. Any regulatory and legal safeguards proposed must be flexible enough to keep up with technological advances. A few key concepts should be at the core of all safeguards being considered, over and above data security: end users should know when they're engaging with an AI system; discrimination and bias must be minimized, and eliminated if possible; transparency and information sharing concerning vulnerabilities, potential dangers, and inappropriate uses are critical; human-controlled breakpoints must be in place when AI is utilized in critical systems, such as those affecting the health and safety of humans, national security, or other critical matters; and standards need to be established.
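Jane's human-controlled breakpoints are, in engineering terms, an approval gate between a model's recommendation and its execution. A minimal sketch of that pattern follows; the action names, risk scale, and threshold are all hypothetical, and nothing here is drawn from the paper itself.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # e.g., "isolate_host" (hypothetical action name)
    target: str        # e.g., "db-prod-03"
    risk_score: float  # model-estimated impact on a 0.0-1.0 scale (assumed)

RISK_THRESHOLD = 0.3  # anything above this requires a human decision

def human_approves(rec: Recommendation) -> bool:
    """Breakpoint: a person reviews the recommendation before it runs."""
    answer = input(f"Approve '{rec.action}' on {rec.target} (risk {rec.risk_score:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(rec: Recommendation) -> None:
    print(f"Executing {rec.action} on {rec.target}")  # stand-in for the real action

def handle(rec: Recommendation) -> None:
    """Low-risk actions may proceed automatically; high-risk ones stop for a human."""
    if rec.risk_score <= RISK_THRESHOLD or human_approves(rec):
        execute(rec)
    else:
        print(f"Held for review: {rec.action} on {rec.target}")

if __name__ == "__main__":
    handle(Recommendation(action="isolate_host", target="db-prod-03", risk_score=0.8))
```

The design choice that matters is that the high-risk branch cannot proceed without a person: the model proposes, but a human disposes.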
One of the cautions, and I know we're running out of time, is consent fatigue. We need to be very cognizant, and this is something we've learned from past laws, particularly privacy laws, that if we put up too many consent boxes, people get confused and frustrated, and sometimes they just say yes without understanding. So that is going to be one of the areas we're going to have to really grapple with as we look at this space.

Excellent. Jonathan, the same question to you, and I'd also love to tie it to the work you've done around supply chain risk and cybersecurity: where do you think AI tools can have an impact, both positive and negative, in that space too, and what recommendations can we take from that?

Yeah. Just to give a nod to the structure of this paper: we had these two scenarios, which makes it a pretty cool read (it was fun to participate in, too), with the "good place" on one side, where we talk about the benefits of AI to defenders, and the "bad place" on the other, where AI benefits attackers more than defenders. You can read about this in more detail in the report, but we lay out some of the benefits defenders would have, and, agreeing with the last panel, we think there might be a net benefit there. On some of those points, and totally agreeing with and building on Jane's points on first principles, defenders may actually have greater capacity when they're using AI tools along with adequate education: greater capacity in terms of labor, in terms of capital for training models, and for actually sticking to best practices around response and detection. I think there are real improvements here that the paper talks to, where we might see real gains in response time through improved pattern recognition and real gains in detection quality that raise overall defenses.

And Yameen, to tie this to your point: we recently published a report at RAND on cybersecurity and supply chain risk, and there's a lot I wish we had envisioned about the AI wave when we started it, but we didn't. Some of what we talked about there is that the supply chain is effectively a widening of the attack surface for every organization. One could oversimplify and say you're only as strong as the weakest link in your supply chain, and that covers both attacks on your supply chain participants and attacks that propagate through your supply chain, in both a digital and a physical sense. Where the recommendations point to benefits for defenders, improvements in detection and response, these amplifications of cyber defenders, AI as a force multiplier, as Amanda said earlier, and stronger code throughout the ecosystem, all of these could lead to real gains that strengthen each node in the supply chain in ways we hadn't even imagined when we were writing that supply chain paper.

Awesome, thank you so much. This has been a great discussion. Jonathan and Jane, thank you both very much for your time, and I'm kicking it back to Jeff.
Great. Thanks, everyone, for joining us today, and thanks to our panelists for their time; we really appreciate it, and it was a great discussion. Just a final plug: if folks want to take a look at our two papers on the Aspen Digital website, please do. And if you're interested more generally in background on AI and how to talk about it, some of our Aspen Digital colleagues put together some great AI primers, which you can also find on the Aspen Digital website. We will continue to work on this issue, AI, and on other cybersecurity issues beyond it; we're trying not to lose focus on the rest of them. So stay tuned for more from the Aspen cyber program, and on our website you can also sign up for our newsletter. Thanks again to our panelists and to our staff for pivoting here, and I hope everyone goes out and enjoys the snow day. Thank you, take care.
The narratives around AI and cybersecurity are the modern day Choose-Your-Own-Adventure: turn to page 10, and AI is solving many of today’s cyber problems and ushering in a new era of strong privacy protections and digital security. But turn to page 20, and AI is instead empowering attackers to overwhelm defenses and undermine the digital foundations of our national and economic security. As with most extremes, the truth will lie somewhere in the middle, but just where it lands will have major consequences for governments, companies, and individuals.
On January 16, cyber and AI experts shared where they believe we are headed, as well as what you and your organization can do to steer in the right direction.
Speakers
George Barnes Former Deputy Director, National Security Agency
Mr. George C. Barnes served as the Deputy Director and senior civilian leader of the U.S. National Security Agency (NSA) from 2017 to 2023. As Deputy Director, Mr. Barnes acted as NSA’s chief operating officer, overseeing strategy execution, establishing policy, guiding operations, and managing the senior civilian leadership. As an agency deputy in the U.S. national security system, Mr. Barnes supported the U.S. defense and intelligence enterprise in the formulation of national security policies, and positioned NSA as an integrated mission partner enabling U.S. decision advantage and security against foreign threats.
Over his 35-year career at NSA, Mr. Barnes held numerous technical and organizational leadership roles spanning intelligence collection operations, target analysis, foreign liaison and industrial partnership management, workforce support, and global enterprise governance.
Mr. Barnes is a certified Cryptologic Engineer with a Bachelor of Science in Electrical Engineering from the University of Maryland. In 2020, he was honored as a Distinguished Alumni by University of Maryland’s College of Electrical and Computer Engineering. He is a recipient of the National Intelligence Medal of Achievement, a Meritorious Civilian Service Award, the NSA Ann Caracristi Award for Operations & Production Excellence, and multiple Meritorious Executive Presidential Rank Awards.
Ben Buchanan Special Advisor on Artificial Intelligence, The White House
Ben Buchanan is the White House Special Advisor on Artificial Intelligence. Previously, he served as a Director for Technology and National Security at the National Security Council. He is on leave from his professorship at Georgetown University, and is the author of three books on AI, cybersecurity, and national security.
Jane Horvath Partner, Gibson Dunn
Jane has more than a decade of information privacy and cybersecurity experience. In January 2023, Jane joined Gibson, Dunn & Crutcher as co-chair of the Privacy, Cybersecurity and Data Innovation practice. Until November 1, 2022, Jane was the Chief Privacy Officer at Apple, where she had been since September 2011. At Apple she was responsible for overseeing Apple's compliance with global privacy laws as well as working internally and externally on developing issues related to privacy and cybersecurity. Prior to Apple, Jane was Global Privacy Counsel at Google and, before that, served as the DOJ's first Chief Privacy Counsel and Civil Liberties Officer. Prior to the DOJ, she was Assistant General Counsel at AOL, where she helped draft the company's first privacy policies. Jane holds a Bachelor of Science from the College of William and Mary and a Juris Doctor from the University of Virginia.
Govind Shivkumar Director, Responsible Technology, Omidyar Network
Govind Shivkumar is the Director of Omidyar Network's Responsible Technology program, focusing on governance and future-proofing technology for an open internet. His expertise includes open source software, technical standards, cybersecurity, digital public goods, internet governance, Web3, and generative AI. He helped found the Digital Public Goods and Open Tech practice. Govind is a founding funder and Executive Committee member at MOSIP, a founding funder at CoDevelop, and helped found the Open Source Policy Network at the Atlantic Council. He also advises the Global Cyber Alliance, and is a Non-Resident Fellow in the Technology and International Affairs program at the Carnegie Endowment for International Peace. Prior to Omidyar Network, Govind spent a decade in investing and capital markets at LGT Group, Unitus Capital, and Citigroup. He is a qualified Chartered Accountant and an alumnus of the Haas School of Business at UC Berkeley and the University of Mumbai.
Bobbie Stempfley Vice President & Business Unit Security Officer, Dell Technologies
Bobbie Stempfley is a vice president and business unit security officer at Dell Technologies and a leader in the field of security and the use of technology to support the public’s interests. In her 20+ years of public service at DOD, DHS, CMU, and now at Dell Technologies, she has focused on strategy and driving transformation in organizations allowing her to develop an understanding of the exquisite possibilities at the crossroads of strategy, policy and technology. She serves on the board of the Center for Internet Security and serves as a nonresident senior fellow for the Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab) at the Atlantic Council. Her passion is in increasing resilience through diversity and collaboration.
She has a B.S. in engineering mathematics from the University of Arizona and an M.S. in computer science from James Madison University.
Amanda Walker Senior Director, Engineering – Security, Privacy, and Safety, Google
Amanda is a well-known leader in the Security and Privacy community, and was one of the early security and privacy pioneers at Google. Over the course of her 13+ year career at Google, she built some of the key systems, teams, and initiatives in the security and privacy areas that we depend on today. Amanda began at Google in 2006 and left in 2019 for Nuna, a health care startup, where she was the VP of Engineering.
Amanda returned to Google in 2022 to lead and unify the Privacy & Security research teams (one of which she founded) into a single organization.
Amanda is based in Reston, Virginia.
Jonathan W. Welburn Senior Researcher, RAND Corporation
Jonathan Welburn, PhD, is a senior researcher at the RAND Corporation and faculty member at the Pardee RAND Graduate School. His work leverages methods from operations research, computational economics, and decision and risk analysis to elucidate emerging systemic risks and the potential for market failures. Along these themes, Welburn's recent research has included efforts to model large interfirm networks, expose systemic cyber risks, identify potentially systemically important entities, and elucidate potential market mechanisms for enhancing cybersecurity. Welburn's work at RAND has been sponsored by several federal agencies and published in RAND reports, academic journals, and national news outlets including the Wall Street Journal, NY Times, and the LA Times.
Aspen Digital Moderators
Katie D’Hondt Brooks Director, Global Cybersecurity Policy, Aspen Digital
Jeff Greene Senior Director, Cybersecurity Programs, Aspen Digital
Jeff Greene is the Senior Director for Cybersecurity Programs at the Aspen Institute. Jeff joined Aspen in July of 2022 from the White House, where he served as the Chief for Cyber Response & Policy in the National Security Council’s Cyber Directorate. Jeff led the NSC’s defensive cyber and incident response efforts, and his team developed and drafted Executive Order 14028 (Improving the Nation’s Cybersecurity). Jeff also ran the White House counter-ransomware effort and oversaw the whole-of-government effort to harden the cybersecurity of U.S. critical infrastructure in advance of Russia’s further invasion of Ukraine.
Jeff previously served as Director of the National Cybersecurity Center of Excellence at the National Institute of Standards and Technology (NIST). Prior to joining NIST he was the Vice President of Global Government Affairs and Policy at Symantec, where he led a global team of policy experts. While at Symantec Jeff also served as an appointed member of NIST’s Information Security and Privacy Advisory Board and was a special government employee working on President Obama’s 2016 Commission on Enhancing National Cybersecurity. Before Symantec Jeff worked on both the House and Senate Homeland Security Committees, was Counsel to the Senate’s Special Investigation into Hurricane Katrina, and practiced law at a large Washington, D.C. firm.
Yameen Huq Director, US Cybersecurity Group, Aspen Digital
Yameen Huq joined Aspen Digital in August 2023 and leads initiatives addressing challenges in US cybersecurity. Prior to joining Aspen Digital, Yameen worked as the Director of Data & Analytics at N2K Networks, Inc., a workforce development and media company known for the podcasts CyberWire Daily and Hacking Humans. In that role, he led a team that built strategic reports across multiple business functions as well as a workforce assessment service offering that provides training roadmaps to clients based on talent gaps in their workforce.
He also previously worked as a Manager in the Data & Analytics and Cybersecurity practices at Ernst & Young LLP, where he led strategic cybersecurity assessments and analytics implementations at Fortune 500 companies across multiple industries. During this time, he also worked as a Deputy Data Director for the 2020 presidential and Senate campaigns in Arizona, where he developed metrics and reporting to optimize voter outreach.
He holds an MS in Cybersecurity, a BS in Economics, and a BS in Chemical & Biomolecular Engineering; all from the Georgia Institute of Technology. He lives in Arlington, VA.
{"includes":[{"object":"taxonomy","value":"134"}],"excludes":[{"object":"page","value":"203374"},{"object":"type","value":"callout"},{"object":"type","value":"form"},{"object":"type","value":"page"},{"object":"type","value":"article"},{"object":"type","value":"company"},{"object":"type","value":"person"},{"object":"type","value":"press"},{"object":"type","value":"report"},{"object":"type","value":"workstream"}],"order":[],"meta":"","rules":[],"property":"","details":["title"],"title":"Browse More Events","description":"","columns":2,"total":4,"filters":[],"filtering":[],"abilities":[],"action":"swipe","buttons":[],"pagination":[],"search":"","className":"random","sorts":[]}