Calling for 2023’s best reporting on A.I.

January 8, 2024

The development of artificial intelligence (AI) will have profound implications for our world. Already, there are many positive use cases that people are exploring, like solving the scientific puzzle of protein folding, improving global climate modeling, and helping spot dangerous landmines left behind by past wars. At the same time, many have raised valid concerns that AI could be used to accelerate mis- and disinformation, disrupt the labor force, or even cause large-scale catastrophes.

As we sort through these implications, mitigating the harms and taking advantage of the benefits, we need to be thoughtful and listen carefully to a diverse set of voices. A wide-ranging, well-informed public conversation is essential. Journalists have an important role to play in educating the public and helping people to understand how AI is showing up in their lives.

But AI is complicated, even for technical experts. For the general public, it can be tough to follow what the technology even does and how it actually works, or to separate popular misconceptions and industry hype from reality.

Even for people with the best intentions, it’s easy to fall back on inaccurate or misleading tropes when talking about these emerging technologies. Sometimes, reporters use personification as a tool to help people understand how AI works or what AI is doing. Terms like “learned” or “understood” get thrown around frequently because they relate to how humans make sense of information. Although personification can be useful, it can lead people to believe that AI systems are more capable than they really are.

At the same time that the popular conversation makes these computers seem more human, it can also obscure the roles that actual humans play in creating, shaping, and using artificial intelligence. Real people decide how AI systems are designed and what outcomes they are built to prioritize. Engineers, product managers, IT professionals, and everyday users are the ones who decide how these systems get used in the world. Even the training process, much of which happens algorithmically, involves humans. Despite all the attention on ChatGPT, for example, there has been relatively little focus on the workers behind the tool who continue to fine-tune the model. Talking about the AI model itself as the agent obfuscates all the people involved in the process.

The best AI journalism avoids these pitfalls. It communicates clearly about how AI models are trained and developed, what these systems actually do when used, and who is involved with each of these processes and decisions. Great descriptions of AI usually do the following: 

  • Use accessible language to convey meaning and define terminology where appropriate
  • Emphasize human actors (whether individuals, organizations, or people in professional roles) when describing the creation, deployment, and impacts of AI tools
  • Employ action verbs appropriate to non-living systems (e.g., “generates,” “produces,” or “processes,” rather than “writes,” “believes,” or “understands”)
    Note: while generally best avoided, personifying terms may be acceptable when clearly used in simile or within “scare quotes.”
  • Describe current capabilities of AI tools in a factual manner, distinct from marketing claims

Below are some examples of excellent descriptions of AI that we’ve come across:

For example, the city of Newark, New Jersey, used risk terrain modeling (RTM) to identify locations with the highest likelihood of aggravated assaults. Developed by Rutgers University researchers, RTM matches crime data with information about land use to identify trends that could be triggering crimes.

What makes it great: Names the specific tool (RTM), clarifies both the tool developers and users (Rutgers and the city of Newark), and explains the specific application of the tool (to identify locations of highest likelihood of aggravated assaults).

Google, for example, used its DeepMind AI to reduce the energy used for cooling its data centers by up to 40 percent.

What makes it great: Concise and accessible. Names the specific tool (DeepMind AI) and organization using it (Google) as well as the particular use (to reduce energy use).

Many elections offices use algorithmic systems to maintain voter registration databases and verify mail ballot signatures, among other tasks.

What makes it great: Concise and accessible. Names specific users (elections offices) and purposes (to maintain voter registration databases, etc.).

The researchers dubbed these anomalous tokens “unspeakable” by ChatGPT, and their existence highlights both how AI models are inscrutable black boxes without clear explanations for their behavior, and how they can have unexpected limitations and failure modes. ChatGPT has been used to generate convincing essays and articles, and has even passed academic exams.

What makes it great: Uses scare quotes for anthropomorphic language (“unspeakable”). Names a specific tool (ChatGPT) and implies human users (has been used).

In our work with journalists, we’ve uncovered a need to elevate more great examples of AI reporting. That’s why we put out an open call for submissions of great short descriptions of AI from 2023. We will compile the examples into the Aspen Digital Reporting on AI Hall of Fame.


Tom Latkowski is an MPP candidate at Georgetown’s McCourt School of Public Policy who currently works as a Google U.S. Public Policy Fellow at the Aspen Institute. He previously interned at the White House Domestic Policy Council and the Office of Senator Dianne Feinstein, and has worked on campaign finance reform, including writing a book on democracy vouchers and co-founding an organization to advocate for reform in Los Angeles. Tom attended UCLA for his bachelor’s degree, where he studied Political Science and Applied Mathematics.
