Protesters chant anti-Mursi and anti-Muslim Brotherhood slogans as they wait in Tahrir Square in Cairo ahead of a public address by President Mohamed Mursi in June 2013. Social media helps movements like the Egyptian revolution gather force, but it didn't bring agreement on a new direction for the country

Inc. magazine dubbed web guru Tim O’Reilly “The Oracle of Silicon Valley.” Wired magazine called the founder and CEO of the O’Reilly Media publishing and conference business “The Trend Spotter.” So when one of the smartest thinkers on technology writes about the future, people are bound to pay attention. In “WTF: What’s the Future and Why It’s Up to Us,” being published by Harper Business on October 10, O’Reilly examines the challenges of solving some of the thorniest problems of the online world. An excerpt:

“WTF: What’s the Future and Why It’s Up to Us” by Tim O’Reilly (Harper Business)

Investor and philanthropist George Soros has pointed out that there are things that are true, things that are false, and things that are true or false only to the extent that people believe in them. He calls this “reflexive knowledge,” but perhaps the old-fashioned term beliefs will serve just as well. So much that matters falls into this category—notably history, politics, and markets. “We are part of the world we seek to understand,” Soros wrote, “and our imperfect understanding plays an important role in shaping the events in which we participate.”

This has always been the case, but our new, world-spanning digital systems, connecting us into a nascent global brain, have accelerated and intensified the process. It is not just facts that spread from mind to mind. It is not just the idea that pots containing decaffeinated coffee should be orange. Misinformation goes viral too, shaping the beliefs of millions. Increasingly, what we know and what we are exposed to are shaped by personalization algorithms, which try to pick out for us from the fire hose of content on the Internet just the things that the algorithms expect we will most likely respond to, appealing to engagement and emotion rather than to literal truth.
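To see that dynamic in miniature, here is a rough sketch in Python (with invented numbers and field names, not any platform’s actual code) of a feed ranker that orders items purely by predicted engagement; nothing in the scoring ever asks whether an item is true.

```python
from dataclasses import dataclass

@dataclass
class Item:
    headline: str
    predicted_clicks: float      # hypothetical model outputs
    predicted_reactions: float
    verified_true: bool          # known to the example, never consulted below

def rank_feed(items: list[Item]) -> list[Item]:
    # Score is expected engagement only; truth plays no part in the ordering.
    return sorted(items,
                  key=lambda i: i.predicted_clicks + i.predicted_reactions,
                  reverse=True)

feed = rank_feed([
    Item("Outrageous rumor", 0.9, 0.8, verified_true=False),
    Item("Careful correction", 0.2, 0.1, verified_true=True),
])
print([i.headline for i in feed])   # the rumor ranks first
```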

In addition to sharing content that confirms their biases and framing it to serve their agendas, users are often so eager for clicks and likes that they pass along stories they haven’t read. One of the simplest algorithmic interventions Facebook and Twitter could make would be to ask people, “Are you sure you want to share that link? You don’t appear to have read the story.”
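A minimal sketch of how such a prompt might be wired up, assuming the platform can tell whether a user has opened a link; the helpers has_opened_link and prompt_user are invented for illustration, not any real platform API.

```python
def confirm_share(user_id: str, url: str, has_opened_link, prompt_user) -> bool:
    """Return True if the share should go ahead."""
    if has_opened_link(user_id, url):
        return True   # the user appears to have read the story
    # Otherwise, ask the question suggested above before completing the share.
    return prompt_user(
        user_id,
        "Are you sure you want to share that link? "
        "You don't appear to have read the story.",
    )

# Example: the user never opened the link, so they are prompted first.
proceed = confirm_share(
    "user-123", "https://example.com/story",
    has_opened_link=lambda user, link: False,
    prompt_user=lambda user, message: input(message + " (y/n) ") == "y",
)
```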

Krishna Bharat, the Google engineer who founded and ran Google News for many years, believes that one of the most important roles for algorithms to play may be as a kind of circuit breaker, which pauses the spread of suspicious postings, providing “enough of a window to gather evidence and have it considered by humans who may choose to arrest the wave before it turns into a tsunami.” Bharat points out that it is not every false story that needs to be flagged, only those that are gaining momentum. “Let us say that a social media platform has decided that it wants to fully address fake news by the time it gets 10,000 shares,” he notes. “To achieve this, they may want to have the wave flagged at 1,000 shares, so that human evaluators have time to study it and respond. For search, you would count queries and clicks rather than shares and the thresholds could be higher, but the overall logic is the same.”
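Bharat’s circuit breaker is easy to sketch. Using the numbers from the passage (flag at 1,000 shares so humans can act well before 10,000), a minimal version might look like the following; the review queue and hold mechanism are hypothetical stand-ins, not a real platform component.

```python
REVIEW_THRESHOLD = 1_000    # flag for human review here...
TARGET_CEILING = 10_000     # ...leaving time to act before this point

share_counts: dict[str, int] = {}
held_for_review: set[str] = set()

def record_share(story_id: str) -> None:
    share_counts[story_id] = share_counts.get(story_id, 0) + 1
    if share_counts[story_id] >= REVIEW_THRESHOLD and story_id not in held_for_review:
        held_for_review.add(story_id)
        # Pause further algorithmic amplification while evidence is gathered;
        # human evaluators then decide whether to release or remove the story.
        enqueue_for_human_review(story_id, share_counts[story_id])

def enqueue_for_human_review(story_id: str, count: int) -> None:
    print(f"{story_id}: {count} shares -- held for human evaluation")

for _ in range(1_500):
    record_share("suspicious-story")   # the breaker trips at share 1,000
```

For search, as Bharat notes, the same logic would apply to queries and clicks rather than shares, with higher thresholds.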

If Facebook is indeed able to make progress in strengthening forms of positive engagement that actually create communities with true social capital, and is able to find an advertising model that supports that goal rather than distorts it, that would likely have a greater impact than any direct attempt to manage fake news. When tuning algorithms, as in ordinary life, it is always better to tackle root causes than symptoms. Humans are a fundamentally social species; the tribalism of today’s toxic online culture may be a sign that it is time to reinvent all of our social institutions for the online era.

In our conversation on the topic, Margaret Levi, director of Stanford University’s Center for Advanced Study in the Behavioral Sciences, offered a concluding warning: “Even when social media helps people engage in collective action—as it did in Egypt—by coordinating them, that is quite distinct from an ongoing organization and movement.” This is what our mutual friend, Wael Ghonim, had learned as a result of his experience with the Egyptian revolution. “Unanswered still,” Margaret continued, “is Wael’s concern about how you transform coordinated and directed action to a sustained movement and community willing to work together to solve hard problems. Especially when they begin as a heterogeneous set of people with somewhat conflicting end goals. They may agree on getting rid of the dictator, but then what?”

Because many of the ad-based algorithms that shape our society are black boxes—either for reasons like those cited by Adam Mosseri, Facebook’s vice president of News Feed, or because they are, in the world of deep learning, inscrutable even to their creators—the question of trust is key. Facebook and Google tell us that their goals are laudable: to create a better user experience. But they are also businesses, and even creating a better user experience is intertwined with their other fitness function: making money.

Evan Williams has been struggling to find an answer to this problem. When he launched Medium, his follow-up to Twitter, in 2012, he wrote, rather presciently as it turned out: “The current system causes increasing amounts of misinformation … and pressure to put out more content more cheaply—depth, originality, or quality be damned. It’s unsustainable and unsatisfying for producers and consumers alike. … We need a new model.”

In January 2017, Ev realized that despite Medium’s success in building a community of writers who produce thoughtful content and a community of readers who value it, he had failed to find that new business model. He threw down the gauntlet, laid off a quarter of Medium’s staff, and committed to rethink everything it does. He had come to realize that however successful, Medium hadn’t gone far enough in breaking with the past. He concluded that the broken system is ad-driven Internet media itself. “It simply doesn’t serve people. In fact, it’s not designed to,” he wrote. “The vast majority of articles, videos, and other ‘content’ we all consume on a daily basis is paid for—directly or indirectly—by corporations who are funding it in order to advance their goals. And it is measured, amplified, and rewarded based on its ability to do that. Period. As a result, we get … well, what we get. And it’s getting worse.”

Ev admits he doesn’t know what the new model looks like, but he’s convinced that it’s essential to search for it. “To continue on this trajectory,” he wrote, “put us at risk—even if we were successful, business-wise—of becoming an extension of a broken system.”

It is very hard to repair that broken system without rebuilding trust. When the algorithms that reward the publishers and platforms are at variance with the algorithms that would benefit users, whose side do publishers come down on? Whose side do Google and Facebook come down on? Whose black box can we trust?

There’s an irony here that everyone crying foul about the dangers of censorship in response to fake news should take deeply to heart. In 2014, Facebook’s research group announced that it had run an experiment to see whether shifting the mix of stories that their readers saw could make people happy or sad. “In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed,” the researchers wrote. “When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.”

The outcry was swift and severe. “To Facebook, we are all lab rats,” trumpeted the New York Times.

Think about this for a moment. Virtually every consumer-facing Internet service uses constant experiments to make its service more addictive, to make content go viral, and to increase its ad revenue or its e-commerce sales. Manipulation to make more money is taken for granted, its techniques even taught and celebrated. But try to understand whether or not the posts that are shown influence people’s emotional state? A disgraceful breach of research ethics!

There is a master algorithm that rules our society, and, with apologies to author and computer science professor Pedro Domingos, it is not some powerful new approach to machine learning. It is a rule that was encoded into modern business decades ago, and has largely gone unchallenged since.

It is the algorithm that led CBS chairman Leslie Moonves to say in March 2016 that Trump’s campaign “may not be good for America, but it’s damn good for CBS.”

You must please that algorithm if you want your business to thrive.

From the book “WTF: What’s the Future and Why It’s Up to Us” by Tim O’Reilly. Copyright © 2017 by Tim O’Reilly. Published on October 10, 2017 by Harper Business, an imprint of HarperCollins Publishers. Reprinted by permission.
