Gina Chua, executive director of the Tow-Knight Center at the Craig Newmark Graduate School of Journalism at the City University of New York (CUNY) and executive editor-at-large at Semafor, has been at the forefront of digital innovation in newsrooms worldwide. Previously, she served as executive editor of Reuters, overseeing editorial operations and collaborating with technology teams to develop newsroom tools. Today, Chua focuses on helping journalists and news consumers navigate an information landscape being rapidly transformed by AI.
In a recent discussion with the current class of Nieman Fellows, Chua spoke about the strengths and limitations of AI, the skills journalists need in this new environment, and how the media has repeatedly misread major technological shifts.
Edited excerpts:
The [founding] principle for this center is that AI will fundamentally change how people come to information, and we have to understand how that’s happening. My line is: You go to war with the audience you have, not the audience you would like to have. There’s no point lecturing people about how they should read news, or what news is good for them, or how they should trust us, or whatever, because they’re going to do what they’re going to do.
The goal of the center is to try and figure out what our role is if we believe in our mission of public interest information. We missed the internet revolution because we wanted it to be what we wanted it to be, not what it turned out to be. We missed the social media revolution because we wanted it to be what we wanted it to be, and it turned out to be something completely different. If we keep insisting AI has to be what we want it to be, we will lose. … If we don’t look up and look at what the world is becoming, we are not going to be in a good place.
The BBC and the European Broadcasting Union just came out with a report where they essentially said AI is really terrible, and there are lots of hallucinations. Forty-five percent of the stories have a problem. I have some problems with their methodology, but the core findings are not wrong: … They say 7% of people use AI for news regularly. Fifteen percent of 18-to-25-year-olds use it regularly. I mean, that number is going to grow, as the quality of the information grows; we’re going to live in that world. And there’s no point saying: “Those guys are really stupid and maybe they should do something better,” because they’re not going to do something better.
It’s not even AI’s fault that we’re losing traffic and losing trust. AI is coming and changing things, and I firmly believe, one, that we have to adjust to the moment. … And, two, [that] there are real opportunities: that we can do our mission better if we understand what AI does well, what it does badly, and how we can essentially use it to fulfill our mission.
When you have AI, you have a sudden drop in the cost of production, a sudden ability to create new sets of information. I think we’re going to be shifting to very small, very tight communities of information, where the value lies less in the work you create than in your understanding of [the audience’s] needs.
I often get asked, “Well, what about our jobs and what about staying employed?” My answer has always been: We are not in the business of keeping journalists employed. We’re not even in the business of preserving journalism, per se. We’re in the business of preserving public information.
So, if we can do it with no people, if we can do it with some people, we’ll do it with as many people as we need to do it with, right? My analogy has been this: Let’s assume that we were a group of doctors, and we’re sitting around and somebody said, “Hey, I have this new tool that will really help public health outcomes overall, but 80% of us will be unemployed.” And then the doctor says, “We can’t let that happen. We need jobs.” You know how we would report on that meeting, right? I mean, we’d be like, “Doctors don’t [care] about public health, they just want to keep their jobs.” The point is just to say: Asking questions, gathering facts, and community-building, that’s where I think our value lies.
If you become just a supplier of information, you go out of business pretty quickly. The value, as most tech companies have realized, is in that last-mile relationship with the user. That’s why I keep thinking that news organizations have to bond with their audiences. If you don’t know what your audience wants, then you’re just a purveyor of straight-up facts. If you don’t have a relationship with your customer, that’s a real problem.
Think about what we do. What is the value chain of journalism? You ask questions — ideally good questions. You gather facts, you add analysis — sometimes by yourself, sometimes [with] machines, sometimes with experts. You add context, you create narrative. You distribute and you engage. We gave up distribution some time ago. We say we engage — we don’t really try, but we should. And we put almost all our soul and our identity into the creation of narrative, because that’s what we do. We’re storytellers. Well, there’s a machine that now does narrative better than most of us, right?
The question is: What are the core skills we truly need to have, and what are the skills that we can usefully outsource to machines? I can’t walk 100 miles. I’m sure back in the day, people could, and they did, and now I don’t have to. So what are the skills that are really, really critical, and which ones are not?
Journalism education does have a real role to play in giving people really core skills, [such as] verification, asking questions, and so on. I don’t know what the answer to that is, partly because AI is changing. Every 20 days, something new comes up. No one has ever called me an optimist. But as I look at the editing tools that I’m building [using AI], … if we can build those tools, and if we can encourage people to use them, I really think that they can be interesting.
I think we have to accept that this is the world that we live in. … You know, cars are terrible — they pollute and they kill too many people. And, you know what? They also move us around. And so I do think we can do both things simultaneously. I think we have to keep looking at the core infrastructure of the world, and this is going to be the core infrastructure of the world — I think we just have to kind of accept that.
Humans do a lot of great things. … Humans also make a lot of really bad decisions. How can we avoid overemphasizing, embellishing, or deifying humanity, while not simply condemning machines because they’re different? What can we do to build the two together? How can we build a cybernetic newsroom? How can we have the best of machines and the best of humans? Not make machines cheap and bad versions of humans, and not make humans bad versions of machines. Machines are good at some things, humans are good at some things. Let’s put those two together, and maybe we will actually move forward as a species.