
We Need Better Reporting on A.I.

Why the Way We Talk About Artificial Intelligence Matters

March 26, 2024

It wasn’t reporting on AI that first got me into artificial intelligence. I got interested in AI the way a lot of us do – watching movies and TV when I was a kid. I loved watching Data on Star Trek and C-3PO in Star Wars. I loved the first two Terminator movies – especially the second one, where (spoiler alert!) Arnold Schwarzenegger is on our side. I was probably six or seven when my mom read me Isaac Asimov’s Foundation series.

To tell the truth, these stories are a big part of why I studied Applied Mathematics at UCLA, why I am getting a Master’s in Public Policy at Georgetown, and why I’ve spent so much time studying technology policy. I care about what the future looks like, and I want to make sure we do a good job building it. That said, while it’s great to take inspiration from fiction about what the future could look like (think Star Trek, not the Terminator), it becomes dangerous when we let those fantasies creep into the way we think about the real challenges and opportunities of AI.

ChatGPT isn’t conscious, like Data or C-3PO. And AI isn’t a government plot gone wrong, like the Terminator. Language matters. When people talk or write inaccurately about AI, it feeds a distorted public view that can lead to unnecessary fear, bad policy, and a skewed sense of which risks we should prioritize.

Since September, I’ve worked as a Google Public Policy Fellow at Aspen Digital. One thing we’ve emphasized in my time here is shifting the public conversation on AI to better reflect what AI actually is and what it means for people. As part of that effort, Aspen Digital published three introductory primers on artificial intelligence for journalists reporting on AI, detailing what it is, how it works, and who creates it.

Today, we’re announcing the 2023 winners of our AI Reporting Hall of Fame, highlighting the best writing about AI tools in action from 2023.

There are a lot of common mistakes in writing about AI. Sometimes, authors use marketing terms that reflect corporate talking points more than reality. Other times, “AI” is written about as if it’s all just one thing, when in fact it can mean anything from large language models to facial recognition technology to online shopping recommendation algorithms to self-driving cars.

But perhaps the most common mistake is personifying or anthropomorphizing artificial intelligence. When reporting on AI, many writers use verbs like “knows” or “understands,” which are not things that these tools are actually doing. I’ll admit it: I was initially a little skeptical that personification was a real problem. If viewing AI as human can sometimes be helpful for understanding it, then what’s the harm in using personification to explain an AI tool in an article, even if it’s not totally accurate? Is it so bad to lean into these metaphors?

The problem is that viewing AI as human is not always helpful for understanding what’s going on. Even setting aside any ethical questions about what it means to be human, personifying language can lead to unrealistic expectations about what an AI tool can do and how accurate we can assume it to be. Already, we’ve seen cases where assuming AI is like a human could cause harm, such as stories about chatbots declaring their love for their users.

What’s more, personifying AI can lead us to ignore the real people who design, test, and deploy these tools, and who deserve the credit (or the blame when things go wrong). Talking about AI as an agent rather than as a tool obscures this reality. When an insurance company uses an AI tool to improperly decide that certain people shouldn’t get coverage, or when police ignore contradictory evidence after using a facial recognition tool, personifying the artificial intelligence only helps them hide the fact that they are the ones making the decision. This isn’t Star Trek or the Terminator – it’s just people, using new technology as an excuse for unpopular decisions. Accurate language makes this fact clear.

And while I appreciate the value of more poetic descriptions of the world, my education has taught me the value of accurate language, too. In math, precise definitions are the building blocks for the entire field. In public policy, vague phrasing can create a loophole in a law or make it easier for someone in power to avoid accountability. Accurate language is equally important in journalism. Certainly, analogies and metaphors can be powerful tools with the right context. But when it comes to educating people about the pressing issues of our time, we need to meet a higher standard.

As I complete my final months of grad school, I plan to continue my work on artificial intelligence, election policy, and campaign finance reform. As I deepen my expertise in these areas, I’m preparing for a career of working to bring about the change we need to see in policy and technology. I’ll do my part to make sure our future is more like Star Trek and less like the Terminator – while remembering not to take science fiction too seriously.

Tom Latkowski
