Journalism’s ‘Normal Accidents’

By exploring theories about how organizations fail, journalists can better understand what is happening in newsrooms and why.
As The New York Times went through its hell last spring, I marveled, as others must have, at the sequence of destructive events consuming it. Who could have imagined that a very junior reporter could have conned the paper so outrageously and for so long? Or that his eventual firing would lead to the departure of a Pulitzer Prize-winning writer, chaos in the newsroom, resignations of the two top editors, and the mortification of the publisher who, only days before, had declared his faith in his executive editor?

Actually it happens—not every day, but frequently enough. Many news organizations have gone through searing difficulties for which there were ample warning signs that were ignored or misinterpreted.

Like the Times, other news organizations, in times of stress, have had to deal with unforeseen and seemingly unrelated events that made a satisfactory resolution impossible. Though details of these cases differ, they can be analyzed for what they had in common before the first obvious sign of trouble, for how the organizations responded to the problems, and for the ways in which the troubles spread out of control.

I came to this line of analysis a few years ago after reading a gripping account by William Langewiesche in The Atlantic about the crash of an airliner into the Florida Everglades. ValuJet 592 caught fire after chemical oxygen generators in its cargo hold ignited. But the real cause of the crash, Langewiesche suggested, was more complicated: an intricate and unpredictable set of events involving a new type of airline spawned by deregulation, the contractors that served ValuJet, and the government agencies that were supposed to oversee it. The crash, he concluded, was what organizational theorists call “a normal accident.”

Paradoxical as it may sound, accidents in some organizations can be considered normal. Such organizations have complicated and highly interactive components. Their operating systems are tightly coupled, providing little room for recovery.

News organizations do not involve risky technologies in the way nuclear power plants, petrochemical factories, or even airliners do. Death and destruction do not occur when media organizations crash, though the loss of public confidence in the press can be widespread and damaging to society. Not all of the features of normal accident theory apply to journalism, but enough do that the theory can be a way of looking at what has been happening in America’s news organizations.

In terms of its mission and its protection under the First Amendment, journalism is a unique industry. But in some ways, as we shall see, it shares organizational similarities with other industries, some prone to normal accidents. If journalists cannot recognize this, they won’t be able to understand why certain things go wrong again and again. And they will be handicapped not just in their ability to prevent disasters, but also in their capacity to effectively serve the First Amendment.

What Jayson Blair did at the Times was no accident. It was deliberate. The same is true of disasters at other news organizations. They are not “accidents,” as we commonly understand the word, but the results of transgressions such as plagiarism, poor judgment, or many of the other things that journalists do that they shouldn’t.

But what came to light after Blair was uncovered, and what happened after he committed his deception, will be familiar to anyone who has looked at normal accident theory. The same can be said of problems at other news organizations. Normal accident theory will not explain everything, but it explains a good deal.

Back in 1993, Doug Underwood, a former Gannett journalist, wrote a book called “When MBAs Rule the Newsroom.” “On virtually every front,” he declared, “the newspaper industry’s approach has been to get its members to adapt the corporate ‘management’ and ‘marketing’ solutions to handling their difficulties.” In the years since, the emphasis on management throughout the news industry has been unprecedented, and yet the landscape is littered with disasters.

News organizations large and small have been brought low by plagiarism, by theft of information or the sale of it, by stories that proved to be fiction instead of fact, by spectacular breaches of the firewall between news and business, by postings of sensational but false information on the Web, by embarrassing failures to get right such big stories as the CNN/Time Tailwind account of how nerve gas supposedly was used by the United States in Indochina, and the San Jose Mercury News’s “Dark Alliance” series, which accused the CIA of introducing crack cocaine into inner cities.

Is there a connecting thread among them? Is there something about news organizations—how they are managed, structured and equipped with new technologies—that might offer us an explanation?

When the Theory Applies to Newsrooms

As a formal inquiry, normal accident theory began in 1984 with a Yale sociologist named Charles Perrow and his book “Normal Accidents: Living With High-Risk Technologies.” Many characteristics of normal accidents, as he defined them, are common to the problems news organizations experience. Perrow, however, was not the first scholar to examine accidents in a new way. A better place to start is with the late British sociologist Barry A. Turner, who made a close study of 84 accidents in the United Kingdom. In “Man-Made Disasters,” published in 1978, Turner noted two phenomena that are found in many journalistic train wrecks.

Turner observed that man-made disasters don’t happen out of the blue. Typically, they have an incubation period, a time when unnoticed sets of problems begin accumulating. These periods certainly occur in journalism. Take the case of Patricia Smith, The Boston Globe columnist who invented people and events and had to resign in 1998. Her record was full of warning signs about fabrications, some dating back more than a dozen years to her work at the Chicago Sun-Times. Editors let them slide.

Second, Turner saw that relevant detail often is buried within a mass of irrelevant information. People don’t go around with their eyes closed but, as Turner notes, “A way of seeing is always also a way of not seeing.” In one instance, Turner found that memos that might have prevented a deadly rail accident went unread because they were regarded as “flotsam” in the system.

The contradictory information in Jayson Blair’s expense accounts was a significant clue that he never went to the places where he supposedly did his reporting. Expenses he turned in for a meal in Washington, for example, came with a receipt from a restaurant in Brooklyn. Yet problems with expense accounts frequently go unnoticed, sometimes for months. And Blair’s expense accounts were reviewed not by editors familiar with his assignments but by an administrative assistant.

In another example, during the planning period before the Los Angeles Times devoted its Sunday magazine to the Staples Center in October 1999 under a profit-sharing agreement, editors attended meetings where clues to the deal could easily have been recognized. At one, as recounted by the paper’s media critic, David Shaw, a 23-page document detailing the arrangement, including “revenue opportunities,” was available. Twenty-three pages of business text can be flotsam to busy journalists.

Though Turner’s work went largely unnoticed in America, Perrow’s book was quickly recognized for its significance. Normal accidents, he wrote, occur in organizations characterized by interactive complexity and tight coupling within their systems. In these situations, he added, “multiple and unexpected interactions of failures are inevitable.”

Not every nuclear power plant will have a serious accident. But given the nature of such installations, somewhere, sometime, accidents are inevitable. Not every paper will find its reporters have made stories up or plagiarized or sold information to tabloids. But the way newsrooms are organized and managed means some disasters are also inevitable there.

News organizations today are characterized by interactive complexity. The growing dependence on the convergence of technologies (print, online, broadcast) requires it. Yet even beyond technologies, news organizations have become complicated places through the development of newsrooms without walls, the introduction of team reporting systems, and the dispersal of authority that once rested with middle management. Decentralized authority can bring perspectives and reporting power from smaller units into a large journalistic project. But the coordination requirements are higher and, as each unit works its piece of the whole, the likelihood of unnoticed problems increases.

In the 1998 CNN/Time Tailwind story alleging that the United States used nerve gas in Laos, the association between the magazine and the network represented a new interactive complexity in journalism. Time lent its name to the project but was involved very little in its preparation. CNN was ill-equipped to handle an investigation of this magnitude. Time’s fact-checking system was suspended after its editors were convinced by a CNN summary that the project was sufficiently researched. CNN never consulted its own military experts. According to the Columbia Journalism Review’s story about this situation, the head of CNN/USA read the 156-page briefing book on the program only after it had been broadcast.

Missed Warnings and Other Newsroom Issues

In his time at The New York Times, Jayson Blair passed through at least four separate units: the internship program, the metropolitan desk, the sports department, and the national desk. They were interactive, linked with central newsroom authorities, but there were plenty of cracks into which warning signs could fall—such as a memo declaring that Blair must be stopped from writing for the Times.

With hindsight, we find it hard to believe such warnings are ignored. Yet normal accident theory teaches us that warnings often receive little attention. As Langewiesche notes, Murphy’s Law is wrong: Perrow has shown that what can go wrong almost always goes right. Otherwise, who would ever get into a car, much less ride on an airplane?

Nowhere is Perrow’s point made more persuasively than in the work of Diane Vaughan, a Boston College sociologist. Her study of the 1986 shuttle disaster, “The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA,” should be read by every editor and publisher.

The shuttle exploded because of the failure of its Viton O-rings, which sealed the joints between segments of the solid rocket boosters. Before launch there were concerns about whether the O-rings would hold in the cold weather that January in Florida. Despite the misgivings of engineers, the mission proceeded, and the seven astronauts were killed when the Challenger exploded. But Vaughan found that in the previous year, O-ring problems had been discovered in seven of the nine shuttle launches. Murphy’s Law was wrong. What could go wrong with the O-rings hadn’t. Vaughan coined a wonderful phrase, “the normalization of deviance,” and these words can be applied to newsrooms everywhere.

Put simply, normalization of deviance occurs when professional standards progressively decline and the boundaries of acceptable practice are stretched. When what can go wrong goes right seven times, or 70, we are tempted to try it again. But the next time, the shuttle might blow up, or a billion-dollar libel suit might land in our laps.

New technologies have coupled newsrooms together more tightly than ever. Photos taken in the field are processed through digital editing systems, and they may get into the newspaper without editors seeing more than a fleeting image on a computer screen. An artful deception, such as the composite photo from Iraq that cost Los Angeles Times photographer Brian Walski his job, is more likely to go unchallenged in the digital age than it would have been had he patched the images together in a darkroom.

Under pressure to get news onto their Web sites, editors can be rushed into posting unverified information that goes instantly to readers. If the material is false—as it was in postings by The Dallas Morning News and The Wall Street Journal during the Clinton-Lewinsky scandal—embarrassing damage is done before an audience of millions.

The furor that followed the revelations about Blair revealed another kind of close coupling. Even as Times executives were trying to put the incident behind them, Internet postings, most particularly on Jim Romenesko’s Web site, kept the issue alive, generating still more problems for the paper in the form of rumors and accusations. Suddenly the Times was confronted with challenges to Judith Miller’s reporting from Iraq. Messages from Times staffers about Executive Editor Howell Raines crackled with indignation about the way he managed. By extending and fanning the controversy, the Internet became an important element in the eventual resignations of Raines and Managing Editor Gerald Boyd.

In contrast to normal accident theory, another theory holds that training, redundancy of safety systems, and a strong organizational culture can drastically reduce accidents. This is called high reliability theory, and aircraft carrier operations and the handling of nuclear weapons are cited as evidence of its validity. Unfortunately, journalism is ill-suited to take advantage of it.

Deviance Becomes Normal in Newsrooms

The findings of Scott D. Sagan, a Stanford political scientist, are relevant here. In his book, “The Limits of Safety: Organizations, Accidents, and Nuclear Weapons,” Sagan writes that to achieve high reliability, organizations need to maintain “a strong organizational culture—in the form of intense socialization, strict discipline, and isolation from the problems of broader society ….” Few newsrooms can function under such requirements. Aircraft carrier deck crews can carry out crash exercises until every person knows exactly what to do. How many newspapers can or should conduct weekly all-hands plagiarism drills?

Sagan also describes what theorists call “garbage can” organizations. I would place news organizations among them. In the garbage can model, organizations frequently lack clear and consistent objectives. The publisher, the editor, and the assistant metro editor handling a sensitive story might have very different ideas about the paper’s mission or even their own responsibilities. Garbage can organizations use “unclear technologies,” whose processes are not adequately understood by all the members of the group. The online editor might know exactly what should go on the Web. Journalists doing the stories might not. In garbage can organizations, the decision-making process is fluid: “Participants come and go; some pay attention, while others do not; key meetings may be dominated by biased, uninformed, or even uninterested personnel.” What journalist has not worked in a newsroom like that?

Garbage can organizations are also prime candidates for normal accidents. When failures occur, analyses tend to pinpoint culprits or faulty equipment and absolve the institution. Instead of recognizing how organizations might have failed, such analyses often conclude with high-minded reaffirmations of institutional values. The bad apple has been identified, and all will be well again.

“The person who did this was Jayson Blair,” said Arthur Sulzberger, Jr., publisher of The New York Times. “Let’s not begin to demonize our executives ….” In fact, Sulzberger’s troubles were just starting.

Safety measures themselves often contribute to normal accidents. As Sagan points out, the problems at the Soviet Union’s Chernobyl nuclear power plant began with the testing of a new safety system. In the conflagration at the Times, as we shall see later, at least two significant safety measures failed to put out the fire, and both made matters worse.

Once, clerks passed out galley proofs to copyeditors, who read them carefully before or between editions. It was a ritual that was socially enforceable: copyeditors who weren’t going through their pile of proofs were obviously not doing their job. Today, in many newsrooms, proofs are gone—often for budget reasons—and editors try to catch the story in the computer system.

That’s a small degradation of standards, and usually nothing goes wrong. But simple errors that copyeditors miss can have significant consequences. As the 1999 Credibility Project of the American Society of Newspaper Editors (ASNE) reported, “Small errors undermine public confidence in the press, and the public finds lots of them in the paper.”

For a large degradation, take the case of the Los Angeles Times. After Mark Willes came to the paper in 1995 as its chief executive officer, he declared that he would knock down the firewall between the news and business sides with “a bazooka, if necessary.” Despite his colorful language, the idea was hardly revolutionary. By then, the concept of “total newspaper management” was already in place in many organizations. But Willes went at it with a vengeance. In his report on that newspaper’s Staples Center disaster, the Los Angeles Times media critic, David Shaw, suggested why it took so long for journalists to recognize that Staples presented a serious threat to the paper’s credibility.

Shaw wrote that “the most important factor in the lag time between first word and big explosion may well have been a gradual, insidious change in the climate at the paper; so many in the newsroom had become so inured to intrusions by business-side considerations in the editorial process that they were desensitized and demoralized.” Bit by bit, the deviant “intrusions” of business had become the norm.

The Clinton-Lewinsky scandal in 1998 provides excellent examples of a vast collective normalization of deviance. Once, it was understood that professional standards meant that stories must be sourced. If information had to be presented anonymously, there were rules, such as the requirement for two independent confirmations. The Committee of Concerned Journalists found that in the pivotal first six days of the story, which set the tone for the coverage to follow, only one statement in 100 was based on two or more named sources, and 40 percent of all reporting based on anonymous sources used but a single source.

The most insidious normalization of deviance involves the decline in editors’ vigilance against errors. Every editor will say that accuracy comes first. But note what ties together Jayson Blair, Patricia Smith, Mike Barnicle, Janet Cooke, Stephen Glass, and Ruth Shalit, to mention just a few high-profile miscreants who stole the words of other writers or faked their stories.

All were valued for their narrative skills, for their “storytelling” ability. None was recognized for reporting skills. As Barbara Crossette, The New York Times’s former bureau chief at the United Nations, wrote recently, “Bright writing now brings the most and quickest rewards inside news organizations.”

There need be no conflict between bright writing and solid reporting. But the pressures on reporters to include “real people,” to provide vivid detail, and always to show instead of tell, together with the extravagant praise heaped upon inexperienced journalists for bringing these things to their stories, can create a dangerous situation. A few years ago, ASNE published a handbook of journalism values asserting, “When it comes to accuracy, the ‘right facts’ means … coverage that ‘rings true’ to readers.” What “rings true” might not be true at all but merely conventional wisdom or stereotype.

In “News Values: Ideas for an Information Age,” Jack Fuller, president and CEO of Tribune Publishing Company, writes, “Reporters who do not meet the simple standard of accuracy should not be taken seriously ….” Editors at The New York Times would surely agree. Yet, despite the staggering number of corrections made to Blair’s stories, he was taken very seriously.

Jayson Blair and the Times

The Blair case shows how normal accident theory can apply to journalism. Earlier I noted how interactive complexity and close coupling, characteristics of organizations that experience normal accidents, were present at the Times. I also have acknowledged that normal accident theory does not cover every aspect of the case.

Blair, a 22-year-old African American, joined the Times as an intern in 1998 and the following year became a full-time reporter. He had no college degree, and in the journalism school at the University of Maryland and later as an intern at The Boston Globe, Blair had been a controversial figure. Editors at the Times found him bright, personable and seemingly candid. The paper apparently assumed that he had finished college. The check on his background appears to have been perfunctory, the warning signals dismissed. The problem was incubating.

The internship program at the Times, where Blair began, was intended to bring minority journalists into the newsroom, and there is speculation that Blair was later allowed to get away with shoddy practices because of his race. The Times denies it. In fact, for this analysis, his race is not very significant.

What is important to observe is that diversity has moved beyond being a value to which news organizations commit themselves. It has become an operating system. In some news organizations, diversity is quantified and incorporated into managers’ compensation, and it has become a key element of news stories, without which articles might be judged incomplete. As an operating system, therefore, diversity becomes one more piece of interactive complexity.

As Blair progressed, he produced stories that were highly praised but that also contained many mistakes requiring corrections. By early 2002, his performance was raising concerns among some within the paper. Jonathan Landman, the metropolitan editor, sent the warning that this error-prone reporter had to be stopped from writing for the paper.

Confronted with his shortcomings, Blair took a short leave. When he returned, the Times put him on a “tough love” regimen. Editors noted that his work became more accurate, and the Times believed that a safety measure had succeeded. It hadn’t. Instead, the editors’ restored confidence gave Blair more opportunities to betray the paper. He was assigned to the sports desk and then, for unclear reasons, sent to Washington as one of the reporters on the breaking sniper story. There he produced several scoops. Complaints that these were false were noted but not acted upon.

When the San Antonio Express-News complained late in April that Blair had stolen its story about a Texas woman whose son was missing in Iraq, his string finally ran out. Rousing itself at last, the Times confronted the young reporter and began investigating his work. Not only had he plagiarized, he had invented sources and facts and had gone to elaborate lengths to pretend he was reporting from cities he had never visited.

Jayson Blair was fired, and shortly thereafter the newspaper published a lengthy report on what he did. An extraordinary meeting followed at which Sulzberger, Raines and Boyd addressed the Times’s staff. The idea was to have an open exchange, to hear an explanation of why Blair’s work was tolerated, and to let Raines acknowledge his grating management style.

But the meeting did not put out the fire. Subsequent Internet exchanges revealed the depth of the divisions on the staff. Another safety fix had failed. Indeed, it only made things worse by encouraging and legitimizing angry criticism of management.

Two weeks after Blair left the Times, the publicity about the paper’s news operations and its bogus stories claimed another journalist. This time it was Rick Bragg, the paper’s Pulitzer Prize-winning stylist, who had relied on the reporting of a personal stringer for stories he detailed so colorfully and on which only his byline appeared. Bragg, too, had risen on the wings of his storytelling ability. And again, it was a complaint from outside the paper that brought a journalist down. Staffers in the newsroom were outraged anew, this time by the implication that here, at journalism’s gold standard, bylines and datelines meant nothing. The fire Jayson Blair had lit, which had smoldered for years, was now burning out of control.

Sulzberger had said the paper’s executives should not be demonized and that he would not accept Raines’s resignation were it offered. What changed his mind is hard to tell. Perhaps it was a meeting with the Washington bureau that convinced him the two top editors had to go. The resentment against the autocratic Raines was too deep, too intense, to be repaired. Boyd, the managing editor, was better liked, but he was Raines’s handpicked deputy, and he, too, was tainted by the Blair affair. That he also resigned was not a surprise.

Two major efforts to contain the fire had failed. It had burned in ways that no one could have predicted and had scorched people far from its origin. Now it threatened to consume the entire organization. Sulzberger took the most drastic step available to him, short of his own resignation. In forestry, it is called a backfire, a blaze lit to stop the progress of an even greater fire.

On June 5, 2003, just a little more than a month after the San Antonio paper had blown the whistle on Jayson Blair, Raines and Boyd resigned. As the two top news executives said farewell to the newsroom, Sulzberger announced that Raines would be replaced on an interim basis by Joseph Lelyveld, who had retired as executive editor in 2001. The paper, Sulzberger said sadly, had seen both good times and bad.

“We will learn from them, and we will grow from them,” he said. “And we will return to doing journalism at this newspaper because that’s what we’re here for.”

Lessons to Be Learned

Sulzberger’s remarks were meant to bring closure to the episode. Whether they will is doubtful. An investigative committee directed by a Times editor, Allan M. Siegal, and charged with sorting through the troubles to determine what went wrong in the newsroom and why, has produced its report, as have two other working groups. Their findings indicate that the Times has a great deal of work ahead to undo damage caused not only by the Blair affair but also by accumulated practice. Moreover, though the problem of the “rogue journalist,” as the Siegal committee refers to Blair, might have been solved, normal accident theory suggests that other difficulties have been created.

By examining the case of Jayson Blair and the Times and by looking more briefly at other disasters in the press, I have sought to bring the analysis of normal accident theory to journalism. Organizations prone to normal accidents have complicated interactivity among their operating units or systems. So do news organizations. Such organizations are closely coupled, as increasingly are news organizations, with their dependence on new technologies that are not thoroughly familiar to all who use them.

I have sought to show that as with organizations where normal accidents happen, news organizations experience the normalization of deviance. Often the beginnings of problems are masked by incubation periods, and relevant detail can be buried in a mass of irrelevant information. I have tried also to describe news organizations as what theorists call garbage can organizations, whose processes and technologies are often not clearly understood. And I have made the point, as do normal accident theorists, that safety measures often do more harm than good.

Does all this mean that news organizations are doomed to catastrophes such as the Jayson Blair episode? Was there nothing the Times could have done to prevent it? Is there nothing it can now do to repair the damage? Is there nothing other news organizations can do to prevent something like this from happening to them?

The answer is an emphatic no.

Normal accidents arise out of the ways we construct and manage our news organizations. If we continue to tolerate the degradation of standards—by allowing, for example, the discipline of verification to yield to the allure of vivid detail and to the speed of treating every aspect of a story as breaking news—then the normalization of deviance will continue in newsrooms. Accuracy, the keystone of what we do, will be further devalued.

If we invest in media convergence without understanding the interactions of technology and the people who must make it work, we court the dangers that lie in the close coupling of systems. When we fail to see that newsroom values—such as narrative storytelling, diversity and decentralization of decision-making—can be thought of as operating systems, we risk finding that our newsrooms have become so complicated that the interaction of everything in them is too difficult to track. The result: We will not apprehend the next disaster until it is upon us.

So let us end where Charles Perrow, the originator of normal accident theory, found his own conclusion:

“These systems are human constructions, whether designed by engineers and corporate presidents or the result of unplanned, unwitting, crescive, slowly evolving human attempts to cope. Either way they are very resistant to change. … But they are human constructions, and humans can deconstruct them or reconstruct them.

“The catastrophes send us warning signals.”

Think of Jayson Blair and The New York Times as a warning signal for journalism.

William F. Woo, a 1967 Nieman Fellow, teaches journalism at Stanford University and is a former editor of the St. Louis Post-Dispatch.