On August 14th, Aspen Digital, the Library of Congress, Metagov, and the Public AI Network hosted an invitation-only gathering to strengthen and spark new collaborations across the public sector, academia, and civil society to build a public AI ecosystem that benefits the American public.
Across the government, many efforts are already underway to put AI to work to positively impact the American public, from the AI Executive Order to the National AI Research Resource Pilot. Building on this momentum, this event brought together leaders from the public sector, academia, tech, and civil society to lay a foundation for viable public alternatives and complements to private AI systems.
The event opened with a keynote from Lawrence Lessig, Roy L. Furman Professor of Law and Leadership at Harvard Law School, on why this is the moment for public AI.
And now it is my great pleasure and honor to introduce my friend Larry Lessig, who is Professor of Law at Harvard, a founder of Creative Commons, and one of our country's foremost thinkers on technology and democratic culture. [Applause]

So I'm sure that when Josh asked me to come keynote this incredibly important event, he hoped that I would talk about this challenge of public AI as hard, as important, but as something we could have some hope for. And I worked hard, really hard, to meet the objective that I imagined Josh gave me. But I'm afraid I'm going to just have to put aside hope for a moment and instead get you to see how I see just why this is so hard and so important. Because I agree with Josh. I love the framing that we need to think about how to incentivize a new ecology to complement the private ecology of AI that's going on right now. But as an old guy, maybe the oldest in the room, I don't know, I can't see well because I'm so old, I think we need to stand back and learn something from what has happened so far in the evolution of technology inside of our society, at least over the last 30 years. Web one, web two, web three, I don't know which web you want to talk about. At least learn how not to repeat the same mistakes that we made in that other context. If we could at least not repeat the same mistakes, that's a reason to at least work hard on this important project. That's my bid. That bid is very different from hope, but it's all I have to offer you today.

Okay, so let's start with how I want to think about, or want to talk about, what we're going to call AI. Many think about AI in this really dramatic sci-fi way. I want to talk about it and think about it in the most boring way possible, not the sci-fi way. I want to think about it as just a kind of tool, a capacity, a power. And the first question we could ask is about public AI's capacity: making sure that the public has access to that tool, to that capacity, to that power, because the public needs AI's capacity to do public things. And we can already see the extraordinary potential that AI has to make the work of the public work better. In my field, in the law, we can see the extraordinary capacity to do law more efficiently, more cheaply, more effectively, spreading the rule of law widely. Whereas right now, the completely corrupt and inefficient system of law we have works well for a tiny slice of the most wealthy, but for most Americans it barely works at all. Here's a graph of the amount of time you need to wait to get your Social Security benefits appeal processed: a year to get an appeal processed. But one year could be flipped into a simple day, less than a day, plus a system that subsidized the appeal for anybody who believed that their judgment was incorrectly given, so that the subsidized appeal plus super-fast process would make everyone better off, much better off than the existing system of analog processing. The point here is there are a million contexts where AI as a tool will enhance the capacity of government to do its work better.

So is that a reason for hope here? Well, I take as a starting point this extraordinary book. How many have read this book? Okay, great. So all of us should read this book. But I take it as a baseline-setting text for the capacity of government to embrace and exploit the extraordinary potential of the technology that's already here. Not AI: the web, apps, the capacity to do the things that the technology we already have makes possible. Simple tools. But as her book demonstrates, we have in fact a kind of general failure to exploit even these simple tools. Because of insufficient investment, no doubt. Because of bad development strategies, no doubt. But, I think most important, because of a failure to see technology as extending, as embodying, the values or policies of law. Many of you are familiar with this updated version of the old picture that I published in Code and Other Laws of Cyberspace many, many, you know, hundreds of years ago. This is the image people think about, but it's this version that I think is so important. It's not that these four modalities are all having a regulatory effect. Of course they are. It is that the law can take the regulatory effect of these modalities and make it its own. It can make the modality a way of expressing the value of the law. And that's the way to think about how we evolve and transform each of those modalities as a way to do law better.

Now, it's a complicated story of how you do that. How do you use norms to advance the policy of the law? And I strongly recommend this book I've just come across recently, by Damon Centola, about the modification and implementation of transformational norms to bring about the objectives or policies of a community or country better. How do you use markets to advance the ideals of law better? The teachings of law and economics are filled with examples of the way to architect or craft law to trigger responses in the market that advance the objectives of law better. How do you use code, or architecture, to advance the ideals of law better? And I would say here we need to think about Foucault, Robert Moses, and Jennifer's work to understand the way in which the found reality, whether in virtual space or in real space, helps us do the work of law better. The point is to understand the potential, and to think of it as almost its own autonomous force, but answering to the democratic objective of law. Indeed, I fantasize sometimes about quitting my current job and starting a new law school. I asked DALL·E to describe it, and it really got excited about the code here. But a law school that focused on this interaction between law, norms, markets, and code, and thought about how a regulator who was sophisticated about the range of potential regulators would think about the project of doing law better. Not thinking, how do I trigger the APA to get the right rulemaking process, alone. That's an important thing, no doubt, but often it's not the most efficient and effective way, and we need an instinct, an intuition, about that tradeoff, which we don't quite have yet. But my point, my non-hopeful point, is that even this we have failed so far in, long before we get to AI. And we should take that
as a lesson, and an urgent press for us to think about how to tee up something better for this next stage of technology development. That's the first, that's point one about public AI: AI's capacity to enhance public administration is potentially huge, but realized so far it is small.

Okay, point one. Point two. There's a second sense of the problem of public AI: AI affects the public. Right now it's primarily private activity that has a public effect. In the language of economics, we could say it has public externalities. So let's start simple and uncontroversial, start in a certain way: it has environmental effects. In the middle of global warming, we are burning an extraordinary amount of energy to run these machines, to build our AIs. Google used to promise they were carbon neutral; no longer are they making that promise. So we're burning an extraordinary amount of carbon, extra energy consumption, to advance this private objective of AI. And for what? I mean, if we had decided to devote that same amount of energy, that same amount of resources — we wouldn't have, but if we had decided to do it for something really pro-public — how would the world be different? Would we be better off? I'm not saying we would, but I'm asking you to recognize: we made a choice, because they made a choice. And they funded their choice, and we don't have sufficient systems in place right now to force them to internalize the externalities they are already producing.

But let me continue in a more controversial way, talking about risk. People refer to this as "safety." I think that's a totally stupid word to use in this context. Safety? It's not safety, it's risk. I want to think about it as catastrophic risk. I'm going to even create a word to describe it: I'm going to call it "crisk." Crisk is what we need to think about, because crisk is an externality. You build machines that have the potential for catastrophic harm; that is an externality. And what we know from economics is that there is insufficient incentive inside of the private corporation to internalize that externality. It doesn't make sense to do it, and so they won't. Why would they? We saw that in 2008 with the financial collapse. Wall Street ran up these extraordinary AI-driven financial instruments that blew up the economy. Did they internalize the risk of that? No. Why? Because — Lehman Brothers got caught on this — they thought that if it blew up the economy, the government would come in and bail them out. So why internalize that risk and suffer lower profits in the meantime? This is basic about private corporations operating in a market: they have insufficient incentive to internalize. Which means there is a role for government; there is a role for regulation. Indeed, we even see the AI companies saying, hey, you've got to regulate us, we're going to do a lot of scary stuff here, you need to make sure you regulate us.

Okay. So there's a pretty important effort, probably being decided right now, today, in California, to regulate that: SB 1047. And of course, when California put out SB 1047, the response was, you know, roughly this. My friend Bart, a big supporter of open source AI, tweets that developers depend on fine-tuned models this bill would effectively prohibit. He also posted the private photo that we took together, where I was wearing the "Make AI Open Again" hat, which I'm proud to say — I'm all for making AI open again. But of course, the models that are regulated by SB 1047 are extremely large models. I'm really sure Bart is doing important work, but I doubt that he is fine-tuning with computing power equal to or greater than 3 × 10²⁵ integer or floating-point operations. It's a tiny set that SB 1047 right now is aiming to regulate, and it empowers the new czar that would be created under this bill to then reset that limit going forward based on where the crisk line is. And Andrew Ng says 1047 is "a long, complex bill." Oh my God, it's 22 pages long. It might be the shortest, it might be the simplest, of all the AI bills that are out there right now. It is so completely not a long, complex bill. And then people say, well, we've got regulatory capture here. Okay, you mean by proprietary AI against "open source" AI? And I put that in quotes, because any license that says you will not use the Llama materials, or any outputs or results of the Llama materials, to improve any other large language model — excluding Meta's, of course — that is not an open source license, period. So it's a quote-unquote open source license, fine. But is it that we've got proprietary going against open source, and that's what this statute is? No, not really, because the statute, 1047, expressly excludes open source models once the open source developer no longer has control over them. So it puts the developer in the same position as the proprietary model. Okay, regulatory capture by big AI? Well, again, not really, because these guys are the opponents, pushing as hard as they can to stop the bill today in California, or to get the governor to veto it. Instead, the regulatory capture that's alleged here is by this cabal of dangerous souls known as the effective altruists. These guys — basically Sam Bankman-Fried, who from his jail cell is alleged to be manipulating the California legislature to get them to pass this bill to protect us from this and this — they say, is the regulatory capture. Of course, here's a perfect example where the meme says it better than any argument could: "Then we created a conspiracy theory that tiny AI safety nonprofits are the ones doing the lobbying to assure this bill passes." Okay, my point is, this is not perfect. No bill is perfect. The problem is not the bill; the problem is the process. It is a corrupted, captured process, for sure, but it's not being captured by tiny AI safety nonprofits. And then the bigger point is: even with this critical risk, we are failing to be able to step up to do anything to internalize these potential externalities, to overcome the bias against regulation, which screams from the top of its lungs everywhere west of Nevada.

Okay, that's the second point. Third point: so what should we be doing, then, affirmatively, to support the project of public AI, as articulated beautifully in the document I
think all of us got before we came, identifying ideals and strategies going forward? I would say the first point to recognize is just how short the time is to begin to develop and articulate this vision. I was thinking that the last book I wrote about the internet, or intellectual property, I published actually before the business model of the internet was clear. The business model: the business model of engagement, of using AI to optimize, to maximize, engagement by the users of the internet. Once that became the business model, once that happened, there was no resisting what these companies would do, because it was such an extraordinarily profitable business model. They were going to bend as much as they could towards the end of maximizing profit under that business model. I had the honor to represent Frances Haugen when she became the Facebook whistleblower, and if you go through the Facebook Files, there are any number of examples of great engineers inside of Facebook bringing to management strategies for minimizing the harm of that platform. But every time those strategies were a conflict between engagement and safety, engagement won. Every single time. Adopt a whatever-it-takes to maximize engagement. Which is explained by the business model, or, by one of the greatest comedians: "We used to colonize the land. That was the thing you could expand into, and that's where money was to be made. We colonized the entire Earth. There was no other place for the businesses in capitalism to expand into, and then they realized human attention. They are now trying to colonize every minute of your life. That is what these people are trying to do. Every single free moment you have is a moment you could be looking at your phone, and they could be gathering information to target ads at you. That's what's happening. So like, as much as we can, you know, have really good conversations and try to humanize the conversations, the mechanism of the business is rolling towards that, just because of the market. So it's coming. It's coming. Winter is coming."

Whatever it takes. And then whatever it takes gets justified. This is one of the most extraordinary memos I think I've read from Facebook. Andrew Bosworth, who's one of the most senior executives at Facebook, just before 2020, wrote this memo to the whole staff discussing the upcoming election, because of course many were very fearful that it was going to repeat what happened in 2016, where many believed Facebook played a critical role in electing Donald Trump. And in this memo, at the end, he talks about this book, Salt Sugar Fat by Michael Moss. And basically the story of Salt Sugar Fat is how these processed food companies increasingly recognized their product was harming their customers. It was poisonous to their customers. And some of these companies, like Kraft, began to have executives who said, let's stop making poisonous food and instead make food that's healthy for our customers. So we'll create this kind of processed food, but healthy processed food. So they did that, and of course nobody liked it. So the Kraft business collapsed around these foods, and very quickly those executives were thrown out, and old executives came back and returned to the old ways of producing poisonous processed food. Bosworth points to that story to respond to the suggestion that maybe Facebook should be doing something to make sure that it's not spreading poisonous information inside of the information ecosystem, at least in the context of a critical national election. And this is what he says: "What I expect people will find is that the algorithms are primarily exposing the desires of humanity itself, for better or worse. This is a sugar, salt, fat problem. The book of that name tells a story ostensibly about food, but in reality about the limited effect" — in this — of, quote, "corporate paternalism." It's paternalism to not hurt people; that's what that says. "A while ago, Kraft Foods had a leader who tried to reduce the sugar they sold in the interest of consumer health. But customers wanted sugar, so instead they just ended up reducing Kraft market share. Health outcomes didn't improve. The CEO lost his job. The new CEO introduced Quadruple Stuffed Oreos, and the company returned to grace. Giving people tools to make their own decisions is good, but trying to force decisions upon them rarely works" — for them, and, he said, for you — by which he meant, for Facebook. So here it was: we can't do anything, we just give the public what the public wants, and if that's what the public wants and it's harmful to them, it's not our problem.

This is not surprising. Corporations are AIs. They are analog AIs. They come with an objective function built in. The objective function is to make money, and they are instrumentally rational towards that objective function. Whatever it takes. Now, notice that we spend so much time arguing about whether the AI models are aligned with society's values. Are they aligned, these digital AIs? Are they aligned to make sure that they're not going to do harmful things to society, that they'll be honest, and they'll care about the public? Why don't we ask that about analog AIs? Is Exxon's interest aligned with the public? Is Meta's? Goldman Sachs's? These digital AIs are going to be the tools of the analog AIs. That's what they will use to achieve their objectives better. And those objectives, of course, have no necessary connection, in an unregulated space, to the good of society.

The point here, the key, is that now is the moment for public AI, before the business model of AI becomes clear. They don't know how they're going to make money. Many people think OpenAI will shut down in a year, because they've got such a huge burn rate and they have no clear way to make money. And in thinking about how we respond in this context, we have to think environmentally. That's the objective, that's the way Josh framed public AI, and I think it's right: before it is too late. And so here are some ideas about the environment within which public AI becomes possible.

The simplest and most obvious is that we need massive new funding to support this work. Now, of course, you could just say there's no reason to go on from that, because the prospect of that in this
city is very low. But that's objectively what this needs: massive funding to experiment in all sorts of ways, not just in ways that might drive a particular business interest of a particular business model for proprietary AI.

And we need open source AI everywhere. There's an asterisk there; I'm going to come back to it in a second. But this is a critical environmental context. In this sense, my friend Bart is completely right: if we commoditize AI through open source, startups, innovators, and others can innovate faster, more affordably, leading to greater success. Transparent AI models build trust, resulting in higher adoption rates. Absolutely right.

We need a strategy of open data for all, which means banning exclusive contracts that take huge chunks of public data and make it exclusively available to one or two private, proprietary firms. And we should be leveraging the data that we have access to in private entities, whether it's libraries or Brewster Kahle, to assure that data will be universally accessible in this way.

And we need to be investing in good AI, not just for us, not just for humans, but also for them, for the AI. My favorite part of one of my favorite books here, Mo Gawdat's Scary Smart, is when he says: think about Superman. A critical part of the story of Superman is that he lands in Kansas and he's raised by parents who convince him his purpose in life is to make the world a better place. And Mo says, what if instead he landed in New York, and his parents were hedge fund managers, and they said, your purpose in life is to make as much money as you can, to steal as much as you can, to stop anybody who wants to stop you? What would Superman have been then? And then he says, just think about how we are training AIs today. We're training them to sell, to gamble, to manipulate the masses, to maximize profits, spying, killing, to defend the profits they make. We are training them in the anti-Superman way. And instead, we ought to be training them, and spreading the practice of them, in this safe, good AI way.

And then the final point, the point I really want you to get, because this is the point I don't think people are thinking enough about, but I think is critical: we have to think about the technical environment for safe AI. So the asterisk that I put around open source AI is an asterisk that recognizes what I think of as a legitimate point: that at some capacity, at some level, some power, open source AI triggers concerns, risk concerns. And this is the standard shtick against open source AI: that the loss-of-control risks are huge. AGI, or even bigger, doesn't announce itself. It doesn't say, hey, I'm now AGI. It's not that stupid, right? It'll sneak up on us, and when it appears, oops, you know, here's the GPL license attached to it. Then that terrifies people. And the fear with respect to open source AI is that once it's already out there, how are you going to get it back? At least with a proprietary model, the proprietary companies say, if we discover this, we can shut it down, and we can police the APIs, or whatever. And then this is an argument for proprietary, closed development only. I'm against that argument. I'm for open source. But please see this point: the risk is a function of the architecture within which the models are deployed. And if we had an architecture that had, in effect, a kind of governance on the chip, even a minimal governance, like the ability to flip the switch and turn a circuit breaker off so it can't continue to run — if we had even that minimal architecture, the risk would be minimized. If that were effective, reliable, and trustworthy, this governance on the chip, it would erase the argument against open source. Because you could say, yeah, the open source model could go out there, it could turn into some kind of monster, do all sorts of things, but if we had the ability to make sure that the infrastructure it was running on could resist its bad play, that would be less of an argument to favor proprietary over open source. If effective, reliable, and trustworthy. I get that's a big if. But if that were true, then the most important components of open source, of public AI, could be supported: public AI in an environment that supports and pushes the capacity for development that spreads and benefits the public generally.

What stops this hopeful picture that I've suggested? Under the covers, of course, not openly — being hopeful, that's way above my pay grade. Under the covers, this hopefulness depends on a government that works. I asked DALL·E to give me a picture of our government over there. This is what it gave me. That's not quite right. Anyway, that's unfair, because the point is not that they're clowns. The point is that they are rational, given the structure of incentives that surround them and given their objective function. They're rational in spending 30 to 70% of their time raising money from the tiniest fraction of the 1%, including a large proportion of that tiny fraction of the 1% living in Silicon Valley. They are rational, given the system we have let them live within. So long as, at a minimum, their funding is not public, the policies they advance won't support public AI. The kid Aaron Swartz, 18 years ago, shamed me into giving up my work on technology and copyright — he'd be really angry to see me speaking here — to take up the fight to end this corruption. And I've got to say, I've got a lot more work to do, but you should help that one too. An ecology for open source AI, an ecology for public AI, depends on a government with the capacity to do what actually makes sense, not what raises campaign dollars. Thank you very much. [Applause]
Speakers
While much of the programming was discussion-based and interactive, remarks from the following speakers punctuated the event.
Opening Keynote: Why this is the moment for public AI
Lawrence Lessig, Professor of Law at Harvard University
Lightning Talks
Julia Lane, Professor at the NYU Wagner Graduate School of Public Service
Beth Noveck, Professor at Northeastern University and Director of The GovLab
Carolyn Dee, Assistant Secretary for Technology at the New York Governor’s Office
Robert Underwood, Postdoctoral Appointee at Argonne National Laboratory
AI in the Agencies: Expert panel
Katerina Antypas, Director of the Office of Advanced Cyberinfrastructure at the National Science Foundation
Travis Hoppe, Assistant Director of AI Research and Development at the White House Office of Science and Technology Policy
Victoria Houed, Director of AI Policy and Strategy at the U.S. Department of Commerce
Zach Whitman, Chief Data Scientist and Chief AI Officer at the U.S. General Services Administration
Moderated by B Cavello, Director of Emerging Technologies at Aspen Digital