Connecting the dots between the cybersecurity challenges of today and the topics that matter to you.
The 9th annual Aspen Cyber Summit made its debut in Washington, DC, on September 18. Watch the recording.
I first encountered generative artificial intelligence (AI) through ChatGPT about a year ago. I was impressed by its ability to generate quick responses to my queries, especially creative prompts. But then I started to wonder: who made this? I learned about OpenAI, its purpose, and its rocky history. Since then, other Big Tech companies have released their own generative AI systems. These releases have drawn public criticism, especially over the new terms of service Google and Meta implemented to train their AI models. Private companies are interpreting our consumer preferences and deciding what is best for us. Unfortunately, as child safety and data privacy concerns have shown, we have seen this approach fail before. In retrospect, when reading about ChatGPT and OpenAI, I didn’t think too critically about them or what alternatives might be possible. “That’s just the way it is.” That changed when I learned about public AI.
As AI becomes more deeply integrated into our daily lives, we need to talk about how to ensure it is accountable and accessible to all of us. When I learned about public AI through my fellowship at Aspen Digital, I realized the implications of just a few tech companies owning this influential technology, and the potential benefits of a public model. As a long-time advocate for strengthening the public health system in the U.S., I saw parallels between the features of public AI and a public health system.
Public resources and services that pool government funds to deliver benefits for all citizens are not a new idea. From public health systems to highway systems, there are many examples of infrastructure that societies deemed necessary for further innovation and productivity. AI is no different. Just like a public health system, such as the National Health Service (NHS) in the U.K., a public AI system could be guided by national values and break down barriers to access. The NHS Constitution explicitly states its goals: to provide comprehensive services to all, to deliver the best value for taxpayers’ money, and to remain accountable to the communities it serves (the public). The NHS’s goals reflect the values of the British people and government, emphasizing equity and accountability. Although public AI and public health systems offer different services, both provide user benefits based on foundations that reflect societal needs and values.
While different models and definitions of a public health system have been explored since the late 19th century, definitions of public AI are considerably newer. Some existing concepts resemble public AI, but approaches such as sovereign AI or open source AI do not necessarily ensure that important public interest goals are met. The features, and the definition, of public AI reflect public values. As Joshua Tan from Metagov explains, public AI systems are “publicly accessible, publicly funded, publicly provisioned, or governed by a public body” (which could be a government or another public interest organization).
With these features, public AI could resemble familiar public services like public health systems around the world. A publicly accessible AI model would allow everyone, not just government employees or highly-trained technologists, to use the system for their needs. A publicly funded AI model could be administered and maintained through government resources or other independent public entities. A publicly provisioned AI model would allow for accountability by the public through transparency and governance, to ensure their needs and concerns are addressed.
To be clear, private companies are not all “bad actors”; rather, they are driven by different incentives that don’t always align with the public interest. The public cannot afford to wait and see whether these private AI systems align with public incentives and values. Public AI, on the other hand, incentivizes reliable service to the public through its accountability structures, where private actors might treat public accountability as a lesser priority. Public AI is not a replacement for private AI. It can even be the reliable public infrastructure upon which private AI is built. Public AI complements government regulation of AI companies while promoting competition in the public interest. It matters because it will diversify the field and ultimately lead to more innovation from both the private and public sectors.
Just like private AI options do not all look the same, public AI will not look identical across different contexts. For instance, if we compare Singapore’s South East Asian Languages In One Network or SEA-LION to New York State’s Empire AI, they are different by design despite serving similar goals of expanding access to AI. Empire AI is a public-private partnership between New York and universities within the state that aims to support AI research and small businesses. SEA-LION is similar to Empire AI in that the government plays a key role in resourcing the initiative, but it represents a coalition of governments and private entities that aim to build large language models to represent Southeast Asian languages. Empire AI focuses on making computing power available while SEA-LION is about making AI models work for an underrepresented language group. Due to the differences in the aims of SEA-LION and Empire AI, their ability to complement and compete with private AI will differ, but both of these efforts are forms of public AI.
Although the public AI approach may sometimes be viewed as an alternative to regulation, in reality it can complement a robust regulatory approach. In this respect, public AI can learn from over a century of experience we have in public health globally. Consider the U.S. and the Netherlands for example. Both countries make significant public health care investments (it might surprise some readers to know that 48% of health care spending in the U.S. comes from the government!), and both their health systems incorporate market dynamics and are based on values of consumer choice. Both systems are designed for beneficiaries to choose their own health insurance from companies that are mainly private insurers. However, the Dutch model pairs public investment with robust regulation. The Dutch health system is more regulated by both the government and independent public bodies, as demonstrated by their national annual cap for health care spending, which is decided by a group that includes the Ministry of Health, insurers, and providers. This results in the Dutch paying less than half of what Americans spend on health care while enjoying five more years of life expectancy at birth.
While a national spending cap does not exist in the U.S., the country has begun to leverage its public investment negotiating power via Medicare, the federal health insurance program for people 65 and older. These examples show that it is possible to both value consumer choice and regulate a system that delivers essential services. Indeed, it is by investing in these public options that this kind of regulation is made possible. When designing public AI, as in public health, we should view the government as a regulatory body, a significant investor, and a builder with a responsibility to ensure that public AI is publicly accessible, publicly accountable, and publicly provisioned.
My studies on health systems have taught me that incentives and values matter when designing systems that provide societal goods. Public AI is no different. As I conclude my studies in health policy this fall, learning about how nations can resource their health systems to improve the quality of health and uphold human rights, I’ll personally be monitoring the advancement of public AI efforts. I hope to push for accessibility and accountability, not just in public health but in tech as well, and maybe be a little more critical of the next shiny consumer tech thing that comes my way in the future.
Diego Burga is an MPH candidate in Health Policy at the Harvard T.H. Chan School of Public Health and currently works as a Google U.S. Public Policy Fellow at the Aspen Institute. During his time at Harvard, Diego supported a project for the Ministry of Health in Mexico. Previously, Diego worked at Partners In Health on their Global Policy & Partnerships Team. Diego also volunteers as a community organizer for Partners In Health Engage where he coaches and trains student leaders advocating for policies that strengthen health systems globally and in the U.S. Diego earned his B.A. in Biological Sciences from Cornell University.