A Blueprint for Documenting and Debunking Misinformation Campaigns

People wearing face masks walk past a Twitter logo outside the company’s New York City headquarters
In 2020, amid a pandemic and protests and a presidential election, misinformation lies in wait everywhere. It’s on our social media feeds, coming out of the mouths of our politicians, and printed in pamphlets mailed to our doors, intermingling indistinguishably with facts. The World Health Organization has termed this an infodemic. Some of it is the result of intentional media manipulation campaigns, scams, hoaxes, and grifts cooked up by people with an agenda. This disinformation, like a virus, is contagious and potentially deadly—to individuals and to democracy itself.

It didn’t start this way. The advent of online communication, and the vast possibility for connection that came with it, enabled people to find each other based on interest and affinity like never before, and gave new toolkits to those engaged in cultural production. Scientific innovators, advocacy groups, and independent media all flourished with new advances in networked communication and broadband technology, establishing their communities on the open web and social media.

But as the naivete of the techno-utopian era fades into the horrors of the infodemic, we now see platforms running defense after knowingly allowing radicalization to flourish. The direct harm caused by ransomware attacks on our vital institutions, by the cyber-troopers of oppressive regimes, by for-profit disinformation outfits, by harmful conspiracy theories grounded in anti-Semitism and medical misinformation, and by the celebration of extremist violence is breaking our institutions, which have little or no ability to identify the source of these attacks.

We at the Technology and Social Change team at Harvard’s Shorenstein Center on Media, Politics and Policy are publishing the Media Manipulation Casebook to help cut through this noise. The Casebook is a database of media manipulation campaign case studies, some old, some ongoing, that we hope will provide a framework to analyze this phenomenon. We intend this research platform to be both a resource to scholars and a tool to help researchers, technologists, policymakers, civil society organizations, and journalists know how and when to respond to the very real threat of media manipulation.

The different stages of the media manipulation life cycle


The heart of the Casebook is the media manipulation life cycle, which presents a methodology for understanding the origins and impacts of media manipulation campaigns, both domestic and international, and their relation to the wider information ecosystem. Situated in the emerging field of Critical Internet Studies, it is the product of three years of research on how journalists, civil society groups, and technologists grapple with media manipulation and disinformation campaigns. We took seriously the need for a cross-sector set of definitions that help us make sense of the tactics of manipulators and the communication strategies they employ to hoax the public.

Here, we break down how each stage of the life cycle works, and the ways different groups of people trying to fight back can be most useful. Media manipulation does not just affect journalists and social media companies; it presents a collective challenge to all of us who believe that knowledge is power. Like a hammer in a world lined with nails, the Casebook offers a way of analyzing interactions across our media ecosystem that is consistent with current journalistic and research practices, which seek to get us closer to the truth.
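
For the technologists and researchers in that audience, a minimal sketch like the one below shows how the five stages can anchor a structured record of a campaign as evidence accumulates. It is a hypothetical illustration only, not the Casebook’s actual data model; the Stage and CaseStudy names are ours.

    from dataclasses import dataclass, field
    from enum import Enum

    class Stage(Enum):
        # The five stages of the media manipulation life cycle.
        CAMPAIGN_PLANNING = 1
        SEEDING = 2
        RESPONSES = 3            # by industry, activists, politicians, and journalists
        MITIGATION = 4
        CAMPAIGN_ADAPTATION = 5

    @dataclass
    class CaseStudy:
        # One documented campaign, annotated stage by stage (hypothetical schema).
        title: str
        first_observed: str                                # ISO date string
        observations: dict = field(default_factory=dict)   # Stage -> list of notes

        def record(self, stage: Stage, note: str) -> None:
            # Attach an observation (a source, artifact, or event) to a stage.
            self.observations.setdefault(stage, []).append(note)

    # Example: log evidence as it surfaces, keyed to its life cycle stage.
    case = CaseStudy(title="Hypothetical hashtag campaign", first_observed="2020-05-01")
    case.record(Stage.CAMPAIGN_PLANNING, "coordination thread archived from a fringe forum")
    case.record(Stage.SEEDING, "hashtag pushed by a cluster of newly created accounts")

Keeping observations keyed to stages in this way makes it easier to see at a glance which parts of a campaign’s life cycle are documented and which remain opaque.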

Stage 1: Campaign Planning


Media manipulation campaigns are a product of our culture, and of Silicon Valley. As the tech industry’s wares spread globally, powered by a technocratic, profit-driven machine, so too were pre-existing social problems reproduced and amplified. In many of the media manipulation campaigns we catalogue in the Casebook, we see small groups of motivated actors, often driven by these toxic social forces, opportunistically using technology to scale and amplify their impact.

Establishing who these people are, and why they are acting, is extremely difficult. Social media platforms, prime targets for extremists and media manipulators, are increasingly opaque and difficult to study critically. This makes establishing intent and attribution for disinformation artifacts and harmful propaganda a time-consuming and emotionally taxing process for journalists and researchers. Behind every visible campaign plan is another layer of communication invisible to outsiders, another new platform adopted to evade regulation and oversight.

But the opacity of content moderation around these materials is exactly what makes external critical research and journalism such a necessary part of pressuring for change.

Uncovering evidence of campaign planning and coordination takes domain expertise, which takes time. This information may be gathered in real time by a dedicated watcher tasked with understanding the dynamics of online subcultural spaces, but is often only available forensically. We know the extent to which the far right organized for Unite the Right, for example, because of chat leaks published by Unicorn Riot. In our case studies, when possible, we illustrate what the beginning of a campaign looks like, and explain how other researchers and journalists can cultivate that domain expertise themselves. Our case studies on the fake Antifa social media accounts phenomenon and the digital blackface “Operation Blaxit” show how planning and coordination may be discoverable for those who know where to look.

Discovering campaign planning and establishing intent are impossible without qualitative research that contextualizes how and why a campaign was created. Rather than relying on large anonymized data sets handed out by these platforms, or on their increasingly restrictive information access, our research methods incorporate ethnographic, sociological, and anthropological understandings of human communication to make sense of the mess. Included as part of our methodological package is “Investigative Digital Ethnography,” a guide for academics and journalists seeking to design social media research that leads to deep insight into communities targeted by disinformation, and those that reliably produce it. While there will always be another layer of a disinformation campaign that we cannot see, we as journalists and researchers must perform clear, reproducible research to collectively address the many online harms we face today.

Stage 2: Seeding the Campaign Across Social Platforms and the Web


Stage 2 is when a campaign moves from planning to execution, when memes, hashtags, forgeries, and false or misleading information are seeded across social media, fringe news sites, blogs, and forums. This stage marks the earliest point at which a campaign, often aided by willing participants, online influencers, and networked factions, moves beyond its original creators. If the messaging and calls to action are attractive enough, the campaign grows, reaching new audiences who often have no idea of the origins or motivations behind what they’re now seeing.

Intervention at this stage is not clear cut. At what point does one intervene? How egregious is the content? What is the likely outcome? Will intervention backfire? This is where civil society organizations (CSOs) play an important role. Because of their domain expertise and connections with the individuals and groups most likely to be affected by an ill-motivated influence operation, CSOs not only know where to look but have a better understanding of the vectors of attack, the wedge issues that will be exploited, and the context and nuance needed to discern what action (if any) should be taken. CSOs with the capacity to monitor such activities therefore become invaluable actors in preventing a potentially dangerous influence operation from moving on to the next stage.

Often more technically savvy and quicker to act, CSOs can counter messaging before it reaches mainstream audiences, pre-bunk likely misconceptions about an issue, and agitate for platform response. Here, humor and creativity are assets activists can draw on in countering mis- and disinformation. Often the first to notice when something seems dubious, CSOs can be a trusted resource. Technology companies and researchers should also take notice, as the most effective interventions will likely involve all parties.

Stage 3: Responses by Industry, Activists, Politicians, and Journalists


Stage 3 of the life cycle model documents how highly visible people and organizations outside of a manipulation campaign react and respond to it. These individuals or institutions can be politicians, government agencies, celebrities, influencers, civil society organizations—or journalists. It is almost always after reactions from these people with cultural power that a manipulation campaign becomes most visible and most dangerous. Stage 3 is a turning point. What happens during this critical period determines whether the campaign gets undue amplification and attention or sputters out in failure.

It’s at this stage that journalistic judgment is most important. Media manipulators crave attention. If the point of Stage 2 is to lay a trap around the internet to get that attention, Stage 3 is where the campaign snares it.

Journalists are often the ones to find those traps, as it is their job to seek out important information the public needs to know. Journalists are on the hunt. And so they have to think of media manipulation campaigns like deadfalls, laid out across the internet to catch them. When encountering evidence of a campaign that’s still in Stage 1 or 2, journalists must carefully balance the need to report on true events against the need not to fall prey to a manipulation campaign. Sometimes it is not in the public interest to report on nascent campaigns.

To determine whether reporting in Stage 3 will do more good than harm, journalists must ask themselves: Does this bit of media manipulation have the potential to cause real harm? Are influential people responding to and spreading it? Do many people seem to be falling for it, and adopting its harmful messaging? If the answer to these questions is yes, then reporting is warranted. If the answers are less clear, they must make the best judgment they can.

As a few of our case studies show, the worst thing journalists can do in Stage 3 is to report on a media manipulation campaign at face value, repeating the misinformation and framing of the campaign. In this case, journalists have been duped. That’s an obvious win for the manipulators.

But journalists can still amplify the manipulation campaign even if they get the reporting right, which makes Stage 3 extremely tricky. If the disinformation campaign is limping along on social media, an article in the mainstream press—even one accurately pointing out how fake or wrong the campaign is—could be the match that lights the fire under the operation.

In that situation, the right move may be to not write a story—to deploy strategic silence.

But if it’s too late for strategic silence—for example, because other news organizations are already amplifying it, or social media platforms are serving it up to huge audiences who are already acting on it, or high-profile people are already responding to it—then it’s well into Stage 3 and it’s appropriate and even necessary to report on it.

One way to think of this is: As journalists, you rarely want to kick off Stage 3. You only want to instigate Stage 3 with your reporting if a campaign has gained such a hidden viral popularity already, outside of mainstream view, that it is causing harm or will be imminently.

In this case, the most important thing to do is report critically. This means deploying “strategic amplification.” It means following the truth sandwich rubric—lead with what’s true, quickly debunk what’s false, and then return to what is known. What is known can be things like who is behind the campaign, where it was planned, who it hurts, and how it fits into the current news cycle and the media manipulation life cycle. Do not link to campaign operators’ handles and websites directly if you can avoid it. Don’t make it easy for readers to use your reporting as a way to find, spread, and join the campaign.

Journalists also have a crucial role to play in Stage 4: Mitigation. By Stage 4, a campaign has reached such a viral tipping point that a corrective in the media is clearly necessary. Whether that corrective is effective depends on the situation, but such reporting is always warranted because the campaign has reached a certain level of public awareness.

Stage 4: Mitigation


Once a campaign is amplified into public awareness, a host of stakeholders must act to mitigate its harms. Journalism plays a crucial role here as well, actively fact-checking and debunking individual disinformation campaigns and bringing the actions and impacts of malicious actors on social media platforms to the attention of civil society, technologists, and policymakers.

As newsrooms adapted over the last four years to the normalization of misinformation on social media, they established regular fact-checking and debunking beats. Fact checkers have written thousands of articles debunking misinformation and conspiracies because they see how audiences are repeatedly targeted by sensational and scandalous content online. It’s a drain on resources that could be much better spent on sustaining journalism rather than on moderating content on platforms. Dedicated fact checks are a form of mitigation, dominating search results for confirmed manipulation campaigns.

Mitigation efforts often fall on civil society, which bears the long tail of manipulation over years as disinformation is dispersed. Journalists, public health and medical professionals, civil society leaders, and law enforcement personnel are bearing the true cost of responding to unrelenting misinformation.

The evidence they gather adds up and can help pressure platforms to change their systems or Terms of Service. Civil society coalitions, like Change the Terms, have pushed for years to force the platform companies into taking responsibility for the harms that proliferate on their sites. Content moderation shouldn’t be the job of civil society—or of the communities being harmed.

Platform companies are the ones who wield the power of content moderation in Stage 4. They can deplatform, remove content, ban terms—in short, they can pull the plug on media manipulation campaigns if they take the right actions at the right times. Deplatforming manipulators and hate-mongers works. But these mitigation efforts often come too late, as with the deplatforming of the white supremacists who planned the murderous Unite the Right rally, or the long, slow growth of the QAnon movement. An example from our Casebook is the case of the targeted harassment of an alleged whistleblower, in which some social media companies followed the lead of mainstream journalism and blocked the use of a specific name on their platforms to protect an individual from harm.

But platform companies often respond too late, or not at all. We know platforms like Facebook have knowingly allowed radicalization to fester with deadly results. Though they have policy departments focused on minimizing harm, and they have pledged over and over again to make their platforms a safe and equitable environment, they often do not take action until civil society and journalists have forced them to. Their disparate mitigation efforts are uncoordinated and unstandardized, allowing manipulators to leverage an asymmetrical media environment to execute attacks.

In the vacuum of regulation, we see repeatedly how platforms prioritize brand protection over real mitigation, acting only when a campaign has ended or adapted. In January 2020, Facebook published a statement: “In the absence of regulation, Facebook and other companies are left to design their own policies. We have based ours on the principle that people should be able to hear from those who wish to lead them, warts and all.”

This reveals that in Stage 4, the missing power broker is regulators, who could create standardized rules for the platforms, but so far have largely abdicated that duty or found it too difficult.

Stage 5: Campaign Adaptation


As many of the cases in the Casebook reveal, despite some mitigation, media manipulation campaigns often find ways to continue. In Stage 5, campaigns adapt when possible, sometimes overnight and sometimes over the course of several years, as in the case studies of the digital blackface Operation Blaxit campaign and the enduring Pizzagate conspiracy theory. Operators are often fully aware of the best ways to exploit sociotechnical systems, using anonymity to avoid attribution and edited materials and coded language to evade automatic content flagging. While these individuals or groups may remain beyond the reach of accountability, major social media platforms remain the main attack vector for such campaigns and bear the responsibility for curbing the impact of this behavior.

Successful platform mitigation is the only way to curb the impact of adaptation by manipulators. The “Plandemic” film, which claimed the Covid-19 virus was deployed by powerful elites to create a new world order, went massively viral in spring 2020. It was taken down after receiving almost two million views, but it continued to circulate on smaller video platforms. Before mitigation, this disinformation campaign operated publicly, even pre-announcing a follow-up film, “Indoctrination.” When that film launched, platforms were ready. By taking proactive action, major platforms did much to stem transmission of the documentary and were able to avoid a repeat of the virality of “Plandemic.” As a result of cross-sector coordination, “Indoctrination” received far less attention. Motivated manipulators will continue to adapt, but without the amplification capabilities of social media at their disposal, their audiences are greatly diminished.

Keeping track of this ecosystem is hard. Campaigns are hard to find, difficult to identify while they are being seeded, and a challenge for journalists, our institutions, and civil society. Platforms’ unmotivated and uneven mitigation practices enable manipulator adaptation. But we at the Technology and Social Change project introduce this model, open to many disciplines and research practices, as a means to detect, document, and debunk misinformation in all its forms. It is a frame for policymakers seeking to understand the impact media manipulation has outside of platforms, and how those platforms are designed for continued exploitation. And we hope it is a blueprint for journalists and researchers seeking standards for how to approach the current information crisis.