I can’t imagine standing in the middle of a Walmart while a fellow customer repeatedly hurls racial epithets at me as the manager watches silently from a corner of the store, politely refusing to intervene for ill-defined reasons. It would undermine the Walmart brand if that scenario played out too frequently.
Neither can I envision a Walmart manager stepping in to rescue me from a fellow customer making a joke about my thinning hair, or at least not kicking my harasser out of the store. But the truth is, when Walmart decided to open its doors and welcome in the public, it took on the responsibility to find the best way to police such interactions, just as members of the public willingly subject themselves to Walmart’s rules.
Given the growing intimacy of the online world, that scenario isn’t far removed from the dilemma faced by the creators of social media sites such as Facebook, YouTube, and Twitter and the users willingly interacting in a veritable public square created by private corporations. While each company has a set of rules to tamp down on vulgar, demeaning, and other kinds of harmful discourse, there is no easy way to decide what is out of bounds and what goes right up to the edge without going over.
When it comes to social media, we are in the “I know it when I see it” wilderness, and people’s perceptions of what is abusive are often colored by emotion and background as much as by tangible, discernible fact. To one, an incident is about demonstrating proper respect for the principle of free speech; to another, it is a license to do untold harm.
The issue came to a head recently when a Vox.com reporter tweeted a link to a video highlighting how he had been constantly harassed on YouTube by a popular right-wing provocateur. YouTube’s response—to make it harder for the provocateur to monetize some of the offensive videos but not kick him off the site—seemed to satisfy no one. Besides, why not let that Vox reporter, who is not among the powerless, given that he has a big platform and a bevy of top journalists willing to defend him, handle the situation himself?
Here’s another truth: Even if YouTube had conjured up the perfect solution, its response would still seem wanting. That’s because the social media landscape is less about the First Amendment (except in instances of egregious government interference) and more about a dilemma caused by a collision of the private and public that can’t be neatly resolved, no matter how many times guardians of free speech falsely claim that to ban anyone is censorship or those subject to demeaning language cry foul when their tormentors aren’t banished.
There are no tidy answers, no clear lines to be drawn, only ones that must perpetually be redrawn based on an ever-evolving set of norms that will forever be difficult to foresee.
No one would bat an eye if a Walmart supervisor stepped in to rescue me from a flood of racial epithets and other kinds of demeaning language, even if my harasser was someone I could easily pummel, because power dynamics are not the only thing that matters. If I’m left to handle my harasser, I’m likely to make the situation worse by either responding with inflamed passions or leaving the store permanently and urging others to do the same, a bad outcome for all involved.
Walmart, in many American communities, particularly in the South, is the largest employer and a daily stop for those in the middle class, others struggling to make ends meet, and others still who show up because it has become a kind of ritual or habit. Though there are other (less desirable) options, that makes it feel indispensable in the way Twitter and Facebook feel to their most frequent users.
Still, no one would say it would set a bad precedent if my harasser was kicked out of Walmart—common decency would demand it, as would common sense—even though free speech guardians reflexively say it is a form of censorship when it happens to a Twitter user.
They are right to be concerned, though, that such moves could become a kind of tax on free expression, leading social media users to self-censor, demand the removal from public spaces of any speech with which they disagree, or use the threat against the weakest among us, a perpetual concern any time government or large corporations are asked to set up boundaries that guide how individuals interact.
And given the exponential growth of such spaces over the past decade, it will be exceedingly difficult for the likes of Facebook and Twitter to rid themselves of the true harassers without at least occasionally jeopardizing those who might falsely be labeled harassers by an algorithm, just as the Walmart manager is likely to get it wrong every now and again, no matter how well trained. (YouTube mistakenly removed educational videos about Nazis in its response to the complaint from the Vox reporter, but restored them.)
But social media giants play a more important role in policing areas where people frequently interact than your average Walmart manager. Their platforms were used—are being used—by hostile forces to undermine our elections and democracy. We can debate how effective or ineffective those efforts have been; nevertheless, the threat is real and growing, as laid out in the Mueller report. For that reason alone, we must demand better of them.
They must find the right formula to excise truly harmful behavior and content while leaving it up to us to navigate the rest. If they can’t adequately uphold that responsibility, they must relinquish their enormous power. The stakes are too high to settle for anything less.