Photo: Unsplash | Illustrations by Joel Kimmel

Journalism at an AI Inflection Point

As artificial intelligence moves into newsrooms, four Nieman Fellows weigh in on the technology’s promise, limits, and risks.

As journalism grapples with another era-defining technological shift, few questions loom larger than how artificial intelligence will impact reporting, writing, and the relationships between journalists and their subjects. At a virtual Nieman-to-Nieman seminar in November, four Nieman Fellow alums — each working on a different aspect of AI through the lenses of journalism, technology, and education — discussed the hype, fear, and potential of artificial intelligence. 

Moderated by Tyler Dukes (NF ’17), lead editor for AI innovation in journalism at the McClatchy Media Co., the discussion brought together Uli Köppen (NF ’19), chief AI officer at public broadcaster Bayerischer Rundfunk in Munich; Ron Stodghill (NF ’01), professor at the Missouri School of Journalism; and Jaemark Tordecilla (NF ’24), a Philippines-based journalist, media adviser, and technologist focused on AI and newsroom innovation. 

Edited excerpts: 

Tyler Dukes: To kick us off, what is just one successful example — either personally or professionally — where you've used AI and have seen the benefits of that technology? 

Jaemark Tordecilla: In Manila, there's been a lot of outrage over infrastructure projects that have been riddled with corruption. Recently, the president published a website listing all the flood-control projects in the Philippines. I took 20 minutes to vibe code [use natural language prompts to have AI generate code] a script to download all the projects from that [website]. It was useful for a couple of media organizations and NGOs that were researching those projects. I also posted it on social media so anyone can take the spreadsheet that I made available, load it into ChatGPT, and ask questions about the particular projects in their areas. It became this whole crowdsourced thing and a nice win for AI use in reporting.
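
The kind of throwaway script Tordecilla describes might look like the sketch below. It is a hypothetical reconstruction, not his actual code: the interview doesn't describe the government site's structure, so the endpoint URL, pagination, and field names are assumptions, written as if the site exposed a paginated JSON API.

```python
import csv
import requests

# Hypothetical endpoint: the real flood-control site and its fields
# are not specified in the interview.
API_URL = "https://example.gov.ph/api/flood-control-projects"

def download_projects(url: str) -> list[dict]:
    """Fetch every project record, assuming a paginated JSON API."""
    projects, page = [], 1
    while True:
        resp = requests.get(url, params={"page": page}, timeout=30)
        resp.raise_for_status()
        batch = resp.json().get("projects", [])
        if not batch:
            break
        projects.extend(batch)
        page += 1
    return projects

def save_csv(projects: list[dict], path: str = "projects.csv") -> None:
    """Write the records to a spreadsheet others can load into ChatGPT."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=projects[0].keys())
        writer.writeheader()
        writer.writerows(projects)

if __name__ == "__main__":
    save_csv(download_projects(API_URL))
```

The resulting projects.csv is the kind of shareable spreadsheet Tordecilla describes: anyone can load it into ChatGPT and ask about projects in their own area.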

Uli Köppen: I am responsible for [AI at] one of the biggest broadcasters in Europe. We have more than 20,000 journalists in the whole organization, so we [created] an AI assistant to help people in the newsroom be more effective and spend their [time] on reporting and analysis, not so much on the versioning of their stories.

We made a small chatbot to [explore] how we can combine our content with the content of other publishers, because we believe in the concept of a trusted content pool that we have as news publishers. We made a chatbot for the Oktoberfest, because we are Munich-based, and [the festival] is one of the biggest events here. As soon as we combined our content with the content of a private publisher, the answers improved. 

Dukes: A lot of these technologies are in the hands of big tech companies: Meta, Google, and now OpenAI. Do you think newsrooms are trying to maintain independence from these companies while still using their tech?

Tordecilla: No, there's no conversation about that. OpenAI hasn't signed any licensing deals [with news outlets] outside the U.S. and Europe, so no newsroom from the Global South has a partnership to get compensation for its content. On the other hand, these tech companies are the only ones spending money on training programs for small newsrooms in this region. We're talking about newsrooms that are already struggling with political harassment, the death of advertising, and the lack of any viable subscription model. There's really no choice but to dance with the devil. It doesn't help that the commercial products are the easiest to deploy inside newsrooms; deploying open-source models effectively requires infrastructure these newsrooms don't have. It's the same movie we saw in the social media era, playing again with AI.

Dukes: Ron, you’re teaching new journalists and helping them refine their skills. I'm curious about how you, your students, and your colleagues at the university are thinking about AI. 

Ron Stodghill: With the students, I'm surprised. I'm doing more with this technology than they are, but guess what? We've shamed them. [Our] journalism program is old, it's got tradition, and it's purist. I do believe that they think the superior way to go about storytelling and reporting is all human. And I'm sitting here thinking, “Yeah, but you’ve got a deadline too.” I think it will be a weakness in our curriculum if we don't lean in on it, to teach them how to use it ethically, responsibly — I think that's where we ought to be going. 

Dukes: There is this temptation with students to use generative AI in particular. How are you convincing students not to use this technology for shortcuts? 

Stodghill: I think AI is a poor writer. There's a generic sheen you see in that kind of writing that doesn't actually do the job. To do journalism that connects and resonates … [AI] hasn't gotten there yet. I think AI is going to allow us to further forget about [the craft], because if we suddenly become more productive, we also become very average and unremarkable.

Dukes: I am least interested in the use of generative AI to write things, because we know how to write, and that's what we spend a lot of time doing. Fundamentally, that's not a problem that I am looking to solve. 

I'm thinking specifically about what happens when, say, 20,000 emails from the [American financier and convicted sex offender Jeffrey] Epstein investigation get dropped on reporters, as they were [in November 2025]. How do you use the search capabilities of these technologies to say, “Find me the things that are going to be relevant to our audiences. Point me to the source material, and then I can put my finger on that.”
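
A minimal sketch of the search-first triage Dukes describes: rank a pile of documents against a reporter's question and return pointers to the originals rather than generated text. This is not McClatchy's actual tooling; it swaps in plain TF-IDF ranking from scikit-learn, and the epstein_emails directory is a hypothetical stand-in for the released files.

```python
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: one plain-text file per released email.
docs = sorted(Path("epstein_emails").glob("*.txt"))
texts = [d.read_text(errors="ignore") for d in docs]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(texts)

def rank(query: str, top_k: int = 10) -> list[tuple[str, float]]:
    """Return the top_k source files most similar to the query,
    so a reporter reads the originals instead of a summary."""
    q = vectorizer.transform([query])
    scores = cosine_similarity(q, matrix).ravel()
    best = scores.argsort()[::-1][:top_k]
    return [(docs[i].name, float(scores[i])) for i in best]

# Example: surface candidate documents for a human to verify.
for name, score in rank("meetings with public officials"):
    print(f"{score:.3f}  {name}")
```

The point of returning file names rather than answers is the one Dukes makes: the tool narrows 20,000 documents to a short list, and the reporter puts a finger on the source material.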

Tordecilla: I would push back on not using AI to write. In places where English is not the first language, people have been using it to write articles. The way I think about it, it's going to be like chocolate. Most of the chocolate we consume in the world is mass-produced. But there's going to be chocolate that's artisanal, that's special, that people are willing to pay for. I suspect that's what's going to happen to writing as well.

For places where English is not the first language, I have been telling journalism professors here that, since we spend so much time trying to teach kids to write in English, perhaps we should be focusing on reporting and other skills if writing can be commoditized with AI.

Dukes: When you're talking to reporters, editors, and media leaders, how do you help them cut through that hype to get at the core of how AI could be useful in practical terms? 

Tordecilla: I think [the hype] goes away quite fast once they … try to use it for actual work. ChatGPT does a bad job making references to the original text, for example. And so they're really disabused [of] the notion that this is going to replace their jobs. The problem is that a lot of people in the newsroom never get around to actually trying it for themselves, especially news managers. Just as audiences are ahead of newsrooms, reporters are ahead of managers in their use of the technology.

Köppen: I would shy away from [creating] text or graphics that are too generic. Our AI assistant is a tool; we're telling our reporters to use it to take repetitive work off their shoulders, but not to use it for large amounts of content.

We are shying away from using photorealistic AI images. If people can't distinguish between something that is generated and something that has really happened, that is no good for our news. But the technology is becoming better and better. Of course, you still have a lot of issues around [AI] hallucinations if you're using it to [generate] text. 

When we published our chatbot … we had teams looking at the results, [refining] the system prompt, and feeding [better] content into the chatbot to be sure we could control the hallucinations. If you decide to fully automate a user-facing product, then you really have to have those guardrails up.
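
One common way to implement the guardrails Köppen describes is to ground the chatbot in a curated pool of trusted articles through its system prompt and instruct it to refuse when that pool holds no answer. The sketch below is a generic illustration of that pattern, not Bayerischer Rundfunk's actual system; the prompt wording, the OpenAI-style message format, and the passages parameter are assumptions.

```python
# Guardrail pattern: constrain the model to curated, trusted content
# and require refusal when the content does not cover the question.
# Hypothetical illustration, not Bayerischer Rundfunk's system.

SYSTEM_PROMPT = """You are a news assistant for Oktoberfest coverage.
Answer ONLY from the numbered source passages provided below.
Cite passage numbers for every claim. If the passages do not
contain the answer, say you don't know. Do not guess."""

def build_prompt(question: str, passages: list[str]) -> list[dict]:
    """Assemble a grounded chat request from curated content."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return [
        {"role": "system", "content": f"{SYSTEM_PROMPT}\n\nSources:\n{sources}"},
        {"role": "user", "content": question},
    ]
```

Retrieval quality matters as much as the prompt: the better the curated passages fed in (the trusted content pool Köppen mentions), the less room the model has to hallucinate.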

Dukes: Looking back on the last three years, what's the most important lesson that you have learned about how to ethically use these technologies? 

Köppen: Only use this technology to support your mission. You have to understand your mission and then you can use technology to support it.

Tordecilla: Technology is not magic, despite the hype, despite the cool demos. Once you get around to using it for real-life work, you realize its limitations. Start with the problem you are trying to solve before reaching for it as a solution.

Stodghill: Institutions that traffic in information have always been suspect to certain groups of marginalized people. I think this is a moment where we need to be thoughtful. I don't think we should write off this technology; it could be a tool of progress and liberation. We should embrace it, but cautiously, and bring the right people around us to keep the humanity in our work.