The Challenge
There is plenty of bad writing about artificial intelligence in the world, and it can be hard to know where to look for examples that get it right. As you read this, people around the world are learning about these tools for the first time. They need help finding the words to describe these complex systems in an accessible, accurate manner.
Our Approach
Aspen Digital is committed to helping journalists, civil society organizations, and public policymakers communicate better about today’s headline-grabbing AI technologies. This work has included educational primers on AI, lesson plans on AI and media literacy, and the 2023 Reporting on AI Hall of Fame, which celebrated exemplary short descriptions of AI tools in reporting.
We’ve partnered with The Online News Association this year to bring back the Reporting on AI Hall of Fame. This time, we will be elevating entire pieces produced between April 3, 2025 and March 30, 2026 that clearly and accurately describe AI technologies from around the world.
Submissions for the 2025 Reporting on AI Hall of Fame close on Thursday, May 14, 2026 at 11:59 p.m. EDT (3:59 a.m. UTC).
Our Rubric
Clear and Accessible Descriptions of AI
- Emphasizes human actors when describing the creation, deployment, and impacts of AI tools. Content is clear about who is making the decisions, whether as individuals, organizations, companies, or professions. For example, instead of “the AI did this,” write “the company uses the AI to do …” or “researchers developed the AI system to …”
- Avoids anthropomorphization by employing action verbs appropriate to non-living systems (e.g., “generates,” “produces a representation,” or “processes,” rather than “writes,” “believes,” or “understands”). While generally best avoided, personifying terms may be acceptable when clearly used in simile or within “scare quotes.”
- Uses accessible language and imagery to convey meaning and defines terminology where appropriate. Some terms used in the AI space carry different meanings in different contexts, such as “accuracy,” “bias,” “hallucination,” “AGI,” and “world model.”
- If relevant, images within the piece clarify how the AI system is being used, how it was built, or what impact it is having, rather than relying on tired tropes like ones and zeroes or unrelated sci-fi robots. Better examples include an image conveying the scale of training data or a graph tracking a data center’s energy use. For more examples, see this short report on Better Images of AI.
Grounded Discussion of AI Capabilities
- Uses specific language to describe the type(s) of AI system being discussed (rather than lumping everything together as “AI”). For example, instead of a “Human Resources AI for job interviews,” the journalist writes about “an emotion recognition system that scores interview recordings.” It may be useful to reference (and explain) specific common architectures like RAG (retrieval augmented generation) or LEOM (large earth observation model), or specific applications like object detection or image generation.
- Explicitly states or explores the design context, strengths, and limitations of the AI system instead of presenting a product as infallible or in marketing terms. If describing an alternative use case, states the original use case for the AI system. For example, the journalist mentions that a specific chatbot was designed as a customer-service bot for auto sales, even though in the piece it is being used as a mental health chatbot.
Journalistic Transparency & Representativeness
- Incorporates a diverse range of perspectives and voices that contribute to a full picture of the issue. The piece should not exclusively rely on sources within the technology industry but should include academic researchers, policy experts, and/or impacted community members.
- Accurately presents technical information, including providing corrections for any false or misleading information. For example, “‘It’s fully automated, so you don’t have to worry about privacy,’ Dr. Example says. The company clarified that there are human reviewers for quality assurance.”
- Includes data provenance information if the piece focuses on data used to build or deploy an AI system. For example, if mentioning a dataset, the author should note where the data comes from, who made it, whether personal information was anonymized, and whether it has known issues such as CSAM inclusion or racist labels on images.
Acknowledgements
This work is made possible thanks to generous support from the Patrick J. McGovern Foundation.