Facebook CEO Mark Zuckerberg testifies remotely during a House Judiciary subcommittee hearing on antitrust on Capitol Hill in July 2020

On September 28, a detailed investigation by Channel 4 News in the U.K. revealed that Trump’s election campaign had deployed an ad strategy it called “Deterrence.” Using Facebook’s microtargeting ad tools, the campaign profiled and categorized 3.5 million Black Americans in an effort to “cultivate hopelessness,” according to Channel 4’s report, and suppress their votes.

That day, like every day since March, when the coronavirus pandemic took hold, Shana Black was working from her home in Cleveland. Scrolling through Facebook, Twitter, and Reddit, she scanned her feeds, keying in search terms, looking for misinformation campaigns like the one Channel 4 News discovered. Census, mask-wearing, political yard signs, planned protest, the Presidential Debate, National Guard in Cleveland, reopening, voter registration…

With the 2020 election less than a month away, the threat of digital election interference fueled by mis-, dis-, and mal-information is stark

Before the 2016 election, Shana Black was an educator. After the 2016 election, she became a journalist, launching a blog called Black Girl in the CLE, and, later, Black Girl Media, an independent media company designed to engage African-American women in Cleveland by spotlighting issues and topics important to her community “with hopes to get people more active by becoming change agents.” “We were the target of misinformation before we even had a word for it,” Black says about her decision. “My audience was targeted. I was targeted. No one like me was telling the community that this is a problem.”

In March 2020, she became one of five First Draft fellows working to address the problem. First Draft, a nonprofit coalition with a mission to combat online misinformation, launched the program in partnership with the Democracy Fund that month. Nancy Watzman, a consultant and the project’s manager, says fellows serve as “embedded resources” for local media, building local collaborations and capacity and then circling back with their insights to First Draft’s national team, which, in turn, supports national outlets as they, too, work to stanch the flow.

“The need is huge,” says Watzman.

With the 2020 election less than a month away, the threat of digital election interference fueled by mis-, dis-, and mal-information is stark. There’s no shortage of examples, from fake Department of Homeland Security flyers in Wisconsin designed to intimidate immigrant and Latinx voters to political ads in Tennessee titled “Caravan” calling Central American migrants “gang members,” “known criminals,” and “possibly even terrorists.”

“Foreign actors are spreading disinformation and attempting to sway voters by executing influence operations,” United States National Counterintelligence and Security Center director William Evanina said on October 6. “They are using an array of cyber activities with the intent to gain access to our election infrastructure.” But according to a team of researchers at Harvard’s Berkman Klein Center for Internet and Society, the problem is also closer to home: “Donald Trump has perfected the art of harnessing mass media to disseminate and at times reinforce his disinformation campaign.”

In addition to her First Draft backing, Black, along with Serena Maria Daniels, a journalist, founder of Tostada Magazine, and Black’s First Draft counterpart in Detroit, has won funding from API’s Trusted Election Fund—Black to create a voter guide and a series of podcast episodes for African-American voters and other communities of color in Cleveland; Daniels to create infographics and a guide to help BIPOC communities recognize signs of misinformation. They talk and DM every day, hoping to slow the spread of election mis-, dis-, and mal-information in their communities.

Related Reading

With mis- and disinformation campaigns heating up, a vacancy on the Supreme Court, and a president who refuses to commit to accepting the results, the 2020 election arrives at a period of extraordinary uncertainty and tension. Nieman Reports and Nieman Lab are publishing a collection of stories exploring how newsrooms are covering this intensely contested vote and its aftermath.

How Not to Cover Voter Fraud
By Yochai Benkler, Nieman Reports

The election could be contested and last for weeks after Nov. 3. Here’s what experts think journalists should know.
By Sarah Scire, Nieman Lab

Critical Coverage: Cable News and Trump’s Covid-19 Diagnosis
By Celeste Katz Marston, Nieman Reports

On October 6, the Democratic majority of the House Subcommittee on Antitrust, Commercial, and Administrative Law released its report on the anticompetitive conduct of Silicon Valley’s most prominent companies, recommending, among other actions, passage of H.R. 2054, the Journalism Competition and Preservation Act. “The companies that once were scrappy, underdog startups that challenged the status quo have become the kinds of monopolies we last saw in the era of oil barons and railroad tycoons,” the report read. “[T]hese firms wield their dominance in ways that erode entrepreneurship, degrade Americans’ privacy online, and undermine the vibrancy of the free and diverse press. The result is less innovation, fewer choices for consumers, and a weakened democracy.”

The research of Young Mie Kim, a professor at the School of Journalism and Mass Communication and a Faculty Affiliate of the Department of Political Science at the University of Wisconsin-Madison who studies election interference, demonstrates, among other findings, that Pennsylvania, Virginia, and Wisconsin were the most targeted states for ad-based social media mis- and disinformation campaigns in the 2016 election. “Focusing on voter suppression of already marginalized populations … is a very smart strategy,” says Kim. “It’s extremely difficult to convert Democrats to Republicans, Republicans to Democrats, even Independents. But if you think about already marginalized voters, who already have barriers to vote…” She pauses. “All you need to do is make sure they don’t come out.”

Targeted ad campaigns were racialized, Kim’s research shows, with one grouping of ads targeting white Americans with nationalist and anti-immigration messaging, and another targeting Americans of color, pushing them to boycott the election. One out of every six of the groups behind the campaigns was Russian-linked; the rest, the report reads, “are ‘anonymous’ groups whose true identity is little known to the public.”

The threat in the U.S. is stark, and the challenge is pressing, and polarizing, on a global scale. In mid-September, an internal document written by Facebook data scientist Sophie Zhang was leaked to BuzzFeed. “I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry,” she wrote, citing activity in Azerbaijan, Honduras, India, Ukraine, Spain, Brazil, Bolivia, and Ecuador. Facebook did not act with malicious intent, according to Zhang, but, rather, operated in perpetual firefighting mode, and, critically, prioritized PR and profit over civic concerns.

On September 30, the “Real Facebook Oversight Board,” a coalition formed to fight disinformation, riffing on the title of Facebook’s own Oversight Board to make its point, held its first press conference, asking Facebook for three “immediate measures”: Enforce its own policies to remove content that incites violence; ban all advertising that predicts election results until the winner is decided and the other candidate concedes; and label all posts that include election results as untrue or unproven until the winner is decided and the other candidate concedes.

Later that evening, Facebook’s Liz Bourgeois, director of the platform’s strategic response communications, posted a tweet in response: “This is a stunt driven by our longtime critics,” she wrote. “Worth noting though that we’re already doing work in these areas. We’re labeling premature claims of victory and blocking ads with such claims. Just today we said we’ll be rejecting ads delegitimizing the election results.” Within a week, Facebook had banned the Real Facebook Oversight Board from its platform.

Joan Donovan, the research director of Harvard’s Shorenstein Center on Media, Politics and Public Policy and part of the Real Facebook Oversight Board, says, “Facebook’s inaction and profiteering are going to disrupt the election process. There has to be accountability around disinformation.” After a beat, she adds, “It’s like standing in quicksand.”

Local journalists, in particular, sometimes feel like they are sinking.

Facebook, Google, Twitter, and Reddit are reportedly working with each other and with federal law enforcement and intelligence agencies to address threats to election integrity. Facebook and Google have launched programs to fund journalism and fact-checking worldwide, to the tune of hundreds of millions of dollars, though critics question their intent and motivation.

A Zoom press conference of the "Real Facebook Oversight Board"

Last October, Twitter announced it would ban all political advertising. Last month, it updated its policies, announcing it would begin labeling and removing tweets deemed likely to cause immediate harm, notably tweets that include premature election results or content about election rigging. On October 9, Twitter announced still more changes, including additional warnings and restrictions on calls for violence or interference at polls, on claims of victory before the election is “authoritatively called,” and “on Tweets with a misleading information label from US political figures.” Starting this week, users who want to Retweet misleading information “will see a prompt pointing them to credible information about the topic before they are able to amplify it.” Reddit, meanwhile, has increased transparency around political ads.

In early September, Facebook announced it would ban all political ads the week before the election. Then, on October 7, the company announced it will “temporarily stop running all social issue, electoral, or political ads in the U.S. after the polls close on November 3, to reduce opportunities for confusion or abuse. We will notify advertisers when this policy is lifted.”

In a lengthy email, a Facebook spokesperson offered a granular description of additional measures the company is taking: “Content that doesn’t violate our voter suppression/interference policies, but may still discourage voting, will be labeled more emphatically. On posts that use COVID-19 to discourage voting, we will attach a link to our COVID-19 Information Center and prevent the content from being run in ads. We will attach an informational label to content that seeks to delegitimize the outcome of the election, for example, by saying lawful methods of voting may lead to fraud. The labels will provide factual information about the underlying claim—e.g., if the post is about mail-in voting and fraud, the label will include a factual statement about mail-in voting followed by a link to the Voting Information Center where people can get real-time updates on voting registration deadlines, requirements, etc. If any candidate or campaign tries to declare victory before results are in, we’ll add a label to their posts … noting that counting is still in progress and no winner has been declared. After a presidential winner is declared by a majority of major news outlets and the declared outcome is contested by another candidate or party, we will label posts from presidential candidates with the declared winner’s name and a link to our Voting Information Center.”

In summary, the spokesperson wrote, “We are banning all paid ads that mention presidential election results until a winner is declared. We’re also labeling all organic posts that include premature claims of victory.”

But many critics say Facebook and the other platforms need to be even more proactive. In Donovan’s words, they have “slacked off,” allowing dangerous misinformation to proliferate, putting already disproportionately vulnerable communities at the greatest risk of harm. “Social media is tooled to distribute financial scams, miracle-promising products and fear-mongering conspiracies alongside medical advice, school closures, news and family updates … Moderating content after something goes wrong is too late.”

While Facebook, the largest and most influential social media company, is understandably in the spotlight, the problem arguably is bigger than any one platform. Several of the men charged with conspiracy to kidnap Michigan Gov. Gretchen Whitmer were active in what’s known as the boogaloo movement and radicalized online, through YouTube, Twitter, Reddit, Facebook, 4chan, and other fringe websites. The U.S.-based QAnon movement—proliferating globally, with more than 200,000 followers in Germany alone—was spawned on 4chan and 8chan (and its successor, 8kun) before moving to Facebook. And, as platforms tighten operations, researchers predict, bad actors will turn to other technologies and mechanisms to wreak havoc.

The problem is complex, requiring nuanced attention to questions of practice, policy, privacy, and free speech, and newsrooms themselves can also unwittingly amplify false information.

After a six-month investigation into the spread of Trump’s false claim that mail-in ballots were threatened by mass election fraud, and his threats to refuse to accept defeat on those grounds, a team of researchers at Harvard’s Berkman Klein Center for Internet and Society concluded that, while platforms amplified mass election fraud disinformation, the mail-in ballot fraud campaign was, at its core, “waged primarily through mass media responding to false assertions from President Trump, his campaign, and the RNC.” In the four years since the 2016 election, through a dense and siloed feedback-loop effect researchers have named “network propaganda,” Trump has “perfected the art of harnessing mass media to disseminate and at times reinforce” disinformation.

The most important independent actors in the month before and the months after the election, the researchers concluded, “will be the editors and reporters of traditional mainstream media, in particular those who write and use syndicated news stories … the most important source of, and defense against, disinformation in American politics in the 2020 election.”

First Draft fellows are working in Colorado, Florida, Ohio, Michigan, and Wisconsin to augment local newsrooms in tackling misinformation. Every day, Watzman says, “the fellows check dashboards they’ve developed to track how online misinformation is spreading in their states.” Then, she says, they report back to the central office via Slack on what they are seeing. First Draft incorporates their insights for distribution to national and local media. Fellows conduct trainings and outreach in their local communities, teaching reporters and editors, for instance, how to determine whether a source is authentic, whether an image has been manipulated, or whether a source is an “influencer” with an agenda. Fellows show newsrooms how to recognize when misinformation spread has reached a “tipping point” and requires direct reporting in response, and how to collect high-quality data on issues that might flare up—the local ballot counting process, for instance—a practice known as “prebunking.”

While Facebook is understandably in the spotlight, the problem arguably is bigger than any one platform

“It is extremely helpful to have local expertise in interpreting hyper-local misinformation, which can be hard for someone sitting at the national level to understand,” Watzman says. “Context means a lot.”

Black says the deep dive into disinformation has left her “more watchful. I’m the target. I’m the Black voter in Ohio … I’m getting texts from the Trump campaign.” Disinformation and misinformation aren’t going away after the November election, she says. Journalists and newsrooms have to adapt for the long term. In addition to her reporting, Black has developed an outreach list for local reporters, hosted a Jeopardy-style event for reporters to test their knowledge of how to track and combat online misinformation, and conducted training webinars for local reporters.

Sandra Fish, an independent data journalist and First Draft fellow working with the Colorado Media Project, is following militia groups in particular. There’s also a strong anti-vaxxer movement “that’s quickly morphing” into Facebook groups spreading the message that Covid-19 isn’t real. Fish has conducted a dozen or so webinars with Colorado reporters and is running a collaborative Q&A voting project. “Facebook has encouraged all these groups,” says Fish, who has joined many of them for tracking purposes. “There’s no policing. People are sharing bad information. Creative people can really sow disruption.”

In Madison, Wisconsin, reporter Howard Hardee is seeing the network propaganda effect play out first-hand, tracking a pattern of influencers with vast digital audiences planting misinformation that trickles down in comment threads across platforms. High-profile radio personalities and politicians seed false narratives with very little consequence, he says—disinformation about the National Guard at the polls, for instance, “trying to scare people, dressed in camo ready to fight, when they’re actually dressed like normal people doing normal poll work.”

After several months of seeing the pattern, he says, “I have a much better grasp on how to help fellow journalists in Wisconsin,” including free distribution of stories, one of which, setting the record straight on contact tracing conspiracies, was picked up by several dozen newspapers and digital outlets in Wisconsin. “A lot of our reporting is dwarfed by inflammatory posts,” he says, “but I was heartened to see we have a really wide audience and word is getting out there.”

Chad Lorenz, project director for Votebeat, a new Chalkbeat initiative funded by philanthropy and aided by the Institute for Nonprofit News, says that getting local journalists and newsrooms trained is critical. “Political forces are trying to undermine the credibility of the election and sow distrust among voters so they either choose not to participate or question the final results,” he says. “We have to correct that because voting is the key to our democracy.”

ProPublica’s Electionland project is providing training to Votebeat’s reporters, who, Lorenz says, are focusing on systemic racism in the electoral process, glitches or meltdowns in polling places, partisan disputes over voting rules, problems in sending or receiving mail-in votes, voter intimidation at the polls, voting disinformation, and any reports of election officials not counting ballots fairly. “We also want to highlight successes and heroes who emerge from these scenarios,” he says. The stories, which are free and distributed through email blasts, “factually rebut accusations that the election is rigged and that fraud is widespread. We journalists provide a better service to the public when we share, collaborate, and coordinate coverage.”

Alex Stamos, who worked as the top security executive at Facebook from 2015 to 2018, before leaving amid disagreement over the role of Russian interference in the 2016 election, advocates for collaboration as well. He is advising newsrooms to create and implement threat models to help them manage the increasing risk of hacks and interference. But it’s not realistic to expect understaffed and under-resourced local newsrooms to do that work on their own, he says, even as there is some evidence that bad actors target resource-strapped local newsrooms: “I would love to see open efforts in this area by organizations like NBCUniversal, The New York Times or Washington Post, or perhaps a collaboration brought together by the Associated Press. Unlike traditional cyber threat modeling, the risks involved for each media organization are pretty closely shared, and I think there would be great value in the large organizations documenting their findings and responses for smaller orgs to adopt.”

None of this will help in communities where local journalism has disappeared altogether. “In places where there are no community moderation rules, misinformation thrives,” says Donovan. When disinformation is hyper-local, she says, people step into the story, “ready to take on the role of policing”—an effect seen in Kenosha, Wisconsin, for example, after Facebook ignored 455 user flags reporting the potential danger of the Kenosha Guard’s event page, and removed the page only after a 17-year-old suspect allegedly shot and killed two protesters.

Facebook continues to scale without a clear plan to handle this social consequence, unveiling a decision to use algorithms to surface groups in users’ news feeds that they don’t already belong to. On October 6, Facebook announced it had removed a post in which Trump falsely claimed that Covid-19 is less deadly than the seasonal flu, in violation of Facebook’s rules on Covid-19 misinformation, while reportedly continuing to allow Trump’s campaign to run ads targeting Florida, Pennsylvania, Ohio, Wisconsin, Michigan, North Carolina, Georgia, Iowa, Arizona, Minnesota, Maine, Nebraska, and Nevada that claim Trump is “immune” to Covid-19.

“We’re taking tiny baby steps” compared to 2016, says the University of Wisconsin-Madison’s Kim, and “we are even more polarized.” But public debate is taking hold. Just as calls are growing for collaboration among newsrooms in the U.S., calls are growing for collaboration among democracies worldwide—for effective regulatory controls, industry standards for security and practices, increased transparency and accountability, unionization, and more robust public research, among other proposals.

Every individual vote, and story, counts. How each one is informed, cast, and reported—the underlying structure—matters, too. “We can’t only be mad at Facebook,” says Black. Newsrooms routinely run ads through Facebook to reach more readers. “It’s a double-edged sword. We have to inform the community. Trying to change Facebook as individuals is like trying to change democracy as individuals. We have to change the process.”

Correction: An earlier version of this story failed to mention that Votebeat is funded by philanthropy; the Institute for Nonprofit News provides support.

Catherine Buni is an award-winning freelance writer and Type Investigations reporter covering online content moderation, risk, and security.
