It wasn’t reporting on AI that first got me into artificial intelligence. I got interested in AI the way a lot of us do – watching movies and TV as a kid. I loved watching Data on Star Trek and C-3PO in Star Wars. I loved the first two Terminator movies – especially the second one, where (spoiler alert!) Arnold Schwarzenegger is on our side. I was probably six or seven when my mom read me Isaac Asimov’s Foundation series.
To tell the truth, these stories are a big part of why I studied Applied Mathematics at UCLA, why I am getting a Master’s in Public Policy at Georgetown, and why I’ve spent so much time studying technology policy. I care about what the future looks like, and I want to make sure we do a good job building it. That said, while it’s great to take inspiration from fiction about what the future could look like (think Star Trek, not the Terminator), it can be dangerous to let those fantasies creep into the way we think about the real challenges and opportunities we face with AI.
ChatGPT isn’t conscious, like Data or C-3PO. And AI isn’t a government plot gone wrong, like the Terminator. Language matters. When people talk or write inaccurately about AI, it feeds an incorrect public view that can lead to unnecessary fear, bad policy, and a skewed sense of which risks we should prioritize.
Since September, I’ve worked as a Google Public Policy Fellow at Aspen Digital. One thing we’ve emphasized in my time here is shifting the public conversation on AI to better reflect what AI actually is and what it means for people. As part of that effort, Aspen Digital published three introductory primers on artificial intelligence for journalists reporting on AI, detailing what it is, how it works, and who creates it.
Today, we’re announcing the winners of our 2023 AI Reporting Hall of Fame, highlighting the best writing from the past year about AI tools in action.
There are a lot of common mistakes in writing about AI. Sometimes, authors use marketing terms that reflect corporate talking points more than reality. Other times, “AI” is written about as if it’s all just one thing, when in fact it can mean anything from large language models to facial recognition technology to online shopping recommendation algorithms to self-driving cars.
But perhaps the most common mistake is personifying or anthropomorphizing artificial intelligence. When reporting on AI, many writers use verbs like “knows” or “understands,” which are not things that these tools are actually doing. I’ll admit it: I was initially a little skeptical that personification was a real problem. If viewing AI as human can sometimes be helpful for understanding it, then what’s the harm in using personification to explain an AI tool in an article, even if it’s not totally accurate? Is it so bad to lean into these metaphors?
The problem is that viewing AI as human is not always helpful for understanding what’s going on. Even setting aside any ethical questions about what it means to be human, personifying language can lead to unrealistic expectations about what an AI tool can do and how accurate we can assume it to be. Already, we’ve seen cases where assuming AI is like a human could cause harm, such as stories about chatbots declaring their love for their users.
What’s more, personifying AI can lead us to ignore the real people who design, test, and deploy these tools, and who deserve the credit (or the blame when things go wrong). Talking about AI as an agent rather than as a tool obscures this reality. When an insurance company uses an AI tool to improperly deny people coverage, or when police ignore contradictory evidence after using a facial recognition tool, personifying the artificial intelligence only helps them hide the fact that they are the ones making the decision. This isn’t Star Trek or the Terminator – it’s just people, using new technology as an excuse for unpopular decisions. Accurate language makes this fact clear.
Marissa Gerchick, Tobi Jegede, Tarak Shah, Ana Gutiérrez, Sophie Beiers, Noam Shemtov, Kath Xu, Anjana Samant, and Aaron Horowitz | ACLU (March 14)
Viola Zhou | Rest of World (April 11)
Isaiah Smith | The Famuan (April 21)
Michael Atleson | FTC Business Blog (May 1)
Julie George | Bulletin of the Atomic Scientists (May 16)
Molly Sharlach | Princeton University Electrical and Computer Engineering (May 30)
shirin anlen & Raquel Vazquez Llorente | WITNESS (June 28)
Mack DeGeurin | Gizmodo (August 15)
Aaron Sankin & Surya Mattu | The Markup (October 2)
Haleluya Hadero | The Associated Press (November 2)
Reed Albergotti | Semafor (November 13)
Angela Spivey | Duke University School of Medicine (November 13)
Susanna Vogel | Healthcare Dive (December 13)
Jeremy Wagstaff | F&D Magazine (December)
Anna Waterman & Stephanee McCadney | Quill (n.d.)
And while I appreciate more poetic descriptions of the world, my education has taught me the value of accurate language, too. In math, precise definitions are the building blocks for the entire field. In public policy, vague phrasing can create a loophole in a law or make it easier for someone in power to avoid accountability. Accurate language is equally important in journalism. Certainly, analogies and metaphors can be powerful tools with the right context. But when it comes to educating people about the pressing issues of our time, we need to meet a higher standard.
As I complete my final months of grad school, I plan to continue my work on artificial intelligence, election policy, and campaign finance reform. As I deepen my expertise in these areas, I’m preparing for a career of working to bring about the change we need to see in policy and technology. I’ll do my part to make sure our future is more like Star Trek and less like the Terminator – while remembering not to take science fiction too seriously.
Tom Latkowski is an MPP candidate at Georgetown’s McCourt School of Public Policy who currently works as a Google U.S. Public Policy Fellow at the Aspen Institute. He has previously interned at the White House Domestic Policy Council and the Office of Senator Dianne Feinstein. Tom has also worked on campaign finance reform, including writing a book on democracy vouchers and co-founding an organization to advocate for reform in Los Angeles. Tom earned his bachelor’s degree at UCLA, where he studied Political Science and Applied Mathematics.