Journalists, like everyone else, are embracing AI—from using machine learning to analyze data to relying on AI tools for transcription, translation, and even drafting headlines and articles.
While AI certainly holds the potential to improve efficiency, it also raises a host of ethical issues for reporters. AI may be powerful, but it is hardly perfect. Its track record includes plagiarism, inaccurate source citations, and "hallucinated" information, all of which undercut a journalist's credibility. AI has also been known to exhibit bias, another cardinal sin among reporters.
Newsrooms using AI also risk further undermining the media’s overall credibility. According to a 2024 Gallup poll, trust in the U.S. media has been dropping steadily for fifty years. Today, roughly 35 percent of all Americans trust traditional media, down from about 70 percent in the early 1970s.
In June, the non-profit media institute Poynter released ethics guidelines to help U.S. newsrooms develop responsible policies for using artificial intelligence. The updated framework, building on last year’s version, aims to help newsrooms enhance their reporting with AI while addressing potential credibility concerns among readers.
The guidelines shed light on exactly what newsrooms are grappling with in the age of AI, suggesting that publications prioritize transparency, accuracy, human oversight, privacy and security, accountability, and exploration.
Poynter also notes that journalists should use AI only for "journalism-centered intentions" that are cleared by the editors and reporters who oversee AI in their newsrooms. No matter what, AI-generated text, data, and images should always be verified by real people.
The guidelines also emphasize that reporters should never use AI to write or rewrite entire pieces, and should instead rely on it only for specific edits.
So, which publications are actually following Poynter’s recommendations?
Newsroom guidelines for implementing AI
Newsrooms today are continually assessing the best way to leverage AI without compromising journalistic integrity. Here’s a sampling of how different newsrooms—each with its own approach—are addressing the use of AI:
- WIRED: Wired was at the leading edge of approaching AI in the newsroom ethically and responsibly, publishing its AI policy nearly two years ago. Notably, the magazine does not publish writing that has been written or edited by AI.
- The New York Times: Although the Times states that "we don't use A.I. to write articles, and journalists are ultimately responsible for everything that we publish," the publication still uses AI tools for investigative work and applies a human-in-the-loop review process to any content initially drafted or researched by AI.
- The Washington Post: In April of this year, WaPo entered a partnership with OpenAI to ensure that its journalism appears in ChatGPT searches, according to the publication.
- The Los Angeles Times and its Op-Ed pages: Earlier this year, the paper's owner announced that "AI-generated counterpoints" would appear alongside each Op-Ed, a move that was met with criticism.
- Politico: Earlier this year, Politico unveiled an AI-driven Policy Intelligence Assistant, which helps policy professionals better navigate and act on Politico's reporting.
Like journalists, PR professionals are also navigating how to use AI ethically and effectively. Communications teams need to be familiar with the many ways newsrooms are using and experimenting with AI, because those practices will change how PR teams approach their own work.
AI is transforming the way journalists write and engage with their audiences. For companies engaging with the media, understanding how newsrooms are using this technology is more important than ever.