In a handbook for aspiring journalists published in 1894, Edwin L. Shuman shared what he called one of the “most valuable secrets of the profession at its present stage of development.”
He revealed that it was standard practice for reporters to invent a few details, provided the made-up facts were nonessential to the overall story.
“Truth in essentials, imagination in nonessentials, is considered a legitimate rule of action in every office,” he wrote. “The paramount object is to make an interesting story.”
It was easy for a reporter of the time to get away with a few, or even a bushel of, inventions. Information was scarce and could take days or weeks to make its way to the public sphere. The telephone was not yet widely in use, and the first transatlantic wireless transmission was years away. The early mass-market Kodak Brownie camera was close to a decade from release. The machinery of publishing and distribution was in the hands of a few.
If a reporter wanted to fudge a few details to make his story a little more colorful, well, chances are no one would notice or call him on it.
Shuman’s advice is objectionable, but something about it—and the information and reporting environment in which it was offered—seems quaint and charming by today’s standards.
It also highlights how much things have changed when it comes to accuracy and verification. “Not too long ago, reporters were the guardians of scarce facts delivered at an appointed time to a passive audience,” writes Storyful founder Mark Little in his essay in this issue. “Today we are the managers of an overabundance of information and content, discovered, verified and delivered in partnership with active communities.”
This photo taken by a passenger after the London subway bombings in 2005 was one of the first images from the scene. It jump-started efforts at both the BBC and The Associated Press to solicit and verify user-generated content. Photo by Alexander Chadwick/AP.
A reporter following Shuman’s advice today would likely find his fabrications swiftly exposed on social media. Bloggers would tally offenses and delve deeper. People with firsthand knowledge of the story in question might step forward with photos and videos to contradict the invented details. Media watchdogs, press critics, and others would call out the reporter and his employer.
Never before in the history of journalism—or society—have more people and organizations been engaged in fact checking and verification. Never has it been so easy to expose an error, check a fact, crowdsource and bring technology to bear in service of verification.
Not surprisingly, the price for inaccuracy has never been higher. The new world of information abundance, of real-time dissemination, of smartphones and digital cameras and social networks has brought the discipline of verification back into fashion as the primary practice and value of journalists.
It has also necessitated an emerging area of expertise built around verifying photos, videos, tweets, status updates, blog posts, and other digital ephemera. I often call this the New Verification. But that’s not to say old values and skills aren’t still at the core of the discipline.
“The business of verifying and debunking content from the public relies far more on journalistic hunches than snazzy technology,” writes David Turner in his article about the BBC’s User Generated Content Hub. “While some call this new specialization in journalism ‘information forensics,’ one does not need to be an IT expert or have special equipment to ask and answer the fundamental questions used to judge whether a scene is staged or not.”
The Hub employs a dedicated team of journalists to verify (and debunk) content from social media. Al Jazeera’s social media team practices verification as a core part of its work, as does a team of producers at CNN’s iReport platform for citizen content. The Associated Press’s photo desk also dedicates significant time and resources to sourcing and verifying photos and videos from social networks.
At Storyful, Little, a former television reporter, and a team of journalists around the world operate a news organization that offers verification as one of its core services for customers such as Reuters and The New York Times.
Imagine: An outsourced verification operation, focused on vetting and curating social media content uploaded and shared by people the world over. It’s a news organization and a business model that would have been inconceivable 10 years ago.
Rumors and Lies
The complexity of verifying content from myriad sources, in various media, and in real time is one of the great new challenges for the profession. This content can provide critical information during conflicts and natural disasters, and can lend clarity and color to a local event.
But it also takes the form of fraudulent messages and images engineered by hoaxers, manipulators and propagandists. Rumors and falsehoods spread just as quickly as, if not faster than, facts. In many cases they prove more compelling, more convincing, more clickable.
“People seem to find it easier to believe rumors that they wish were true or that seem to fulfill a desire to hear the worst,” writes Tuscaloosa (Ala.) News city editor Katherine K. Lee in her essay about the News’s Pulitzer Prize-winning tornado coverage.
Lee’s experience speaks to a fundamental, if depressing, truth about humans and facts: the mere fact that something is true does not make people more likely to believe it. Facts alone are not enough to persuade, to change minds.
Liars and manipulators are often more persuasive than the press, even with our growing cadre of checkers and verification specialists. Reporting and checking the facts isn’t the same as convincing people of them.
This is one of the battles being fought in the shift from information scarcity and tight distribution to information abundance and media fragmentation. As I’ve previously written, “the forces of untruth have more money, more people, and … much better expertise. They know how to birth and spread a lie better than we know how to debunk one. They are more creative about it, and, by the very nature of what they’re doing, they aren’t constrained by ethics or professional standards. Advantage, liars.”
Researchers Brendan Nyhan of Dartmouth College and Jason Reifler of Georgia State University have in recent years provided evidence that those working to spread lies large and small have a distinct advantage: the human brain.
“Unfortunately, available research in this area paints a pessimistic picture: the most salient misperceptions are typically difficult to correct,” the pair wrote on the Columbia Journalism Review’s website earlier this year. “This is because, in part, people’s evaluations of new information are shaped by their beliefs. When we encounter news that challenges our views, our brains may produce a variety of responses to compensate for this unwelcome information. As a result, corrections are sometimes ineffective and can even backfire.”
Humans resist correction and are disinclined to change closely held beliefs. We seek out sources of information that confirm our existing views. When confronted by contrary information, we find ways to avoid accepting it as true. We are governed by emotion, not by reason. (Read more about these factors in Nyhan and Reifler’s “Misinformation and Fact-checking: Research Findings From Social Science,” a paper written for New America Foundation’s Media Policy Initiative.)
These truths about human behavior help explain why political misinformation is so pervasive and effective, and why myths and falsehoods take hold in society. The emergence of moneyed Super PACs promises an election year lousy with misleading ads, nasty e-mail campaigns, and manufactured lies.
Bad actors also make use of Twitter and other networks to create fake accounts that spread untruths or inject fraudulent chatter into the conversation. In dictatorships, they create fake videos and images and upload them to YouTube and other websites in the hope that news organizations and the public will find them and take them for real.
There is no shortage of work for fact checkers and the emerging verification experts within news organizations. But along with checking and vetting, we must also make the product of this work more persuasive and shareable.
Spreading facts requires the use of narrative, powerful images and visualization, and appeals to emotion. We must engage readers in ways that help them get past their biases. It also requires that we dedicate ourselves to spreading the skills of verification and fact checking within journalism—and to the public as a whole.
A public with the ability to spot a hoax website, verify a tweet, detect a faked photo, and evaluate sources of information is a more informed public. A public more resistant to untruths and so-called rumor bombs. (Think “death panels.”) This is a public that can participate in fact checking, rather than merely be an audience for it.
Fact checking and verification are having a moment right now. But what matters is whether this is a flash in the pan or a turning point—whether all the effort being put into fact checking and verification can have a measurable effect on the persistence of misinformation and lies in our new information ecosystem.
I’d hate for a journalist to dig up this issue decades or a century in the future and marvel at our foolishness the way we did about Mr. Shuman and his great secret of 19th-century journalism.
Craig Silverman writes the Regret the Error blog about accuracy, errors and verification for the Poynter Institute, where he is an adjunct faculty member. He is the author of “Regret the Error: How Media Mistakes Pollute the Press and Imperil Free Speech.”