On Shaming and Harassment: The Limits of Speech in the Digital World

By Michael W. Harris

Justine Sacco might be the unintentional poster child for our digital communications era. Over Christmas vacation in 2013, while traveling to South Africa, she tweeted a joke and then boarded an eleven-hour flight from London to Cape Town. By the time she landed, the then-director of corporate communications for IAC was in the middle of a public relations nightmare, just the sort of thing she would normally be in charge of managing. Despite her meager 170 Twitter followers, her tweet had resulted in a worldwide trending hashtag, a feverish watch of #HasJustineLandedYet, and an internet mob piling onto her simple, albeit incredibly insensitive and racist, joke: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” The fact that she was completely unaware of what was going on in real time while her plane traversed the length of the continent she had insulted created the perfect storm for internet schadenfreude.

Since this instance of public shaming, and the very real consequence of a person losing her job over something she posted as a private citizen, the ability of the digital mob, and indeed its basest form, the “trolls,” to enact change in the physical world has only grown more powerful. From running comedian Leslie Jones off of social media with unceasing harassment, a charge led by conservative media personality and internet provocateur Milo Yiannopoulos, to the doxxing of female game designers, journalists, and those who supported them during Gamergate, to even the halls of academia, with recent controversies surrounding journal articles on provocative topics, the power of the internet masses has been clearly demonstrated. Now the question is whether that power should be limited. Is the internet analogous to a public square? Does it fall under the First Amendment’s protections of free speech? Or does the fact that these forums are controlled by private companies mean that some responsibility should be shared when that speech results in real-world consequences like lost jobs or even physical attacks?

Free speech is a complex issue, and one that many people either misunderstand or purposefully misuse. To have free speech is not to be free from the consequences of your speech, and certain types of speech are regulated. As Lawrence Lessig writes in Code Version 2.0, “You cannot be jailed for criticizing the President, though you can be jailed for threatening him; you cannot be fined for promoting segregation, though you will be shunned if you do” (233). (The fact that this statement could actually be questioned in our current historical moment is something that I will just leave aside.) But where many of the free speech defenses of the harassment wrought by Yiannopoulos, the anonymous troll armies of 4chan, or any other group fall short is on the question of what constitutes the public square, or commons, on the internet. A person voicing an opinion at a public forum, or protesting with a sign on a street corner, is not perfectly analogous to a person posting a tweet. Or is it?

The notion of the internet as “commons” is hotly debated, especially right now as our society wrestles, again, with the issues surrounding net neutrality and whether our current President’s personal Twitter account constitutes an official channel of communication (and, if it does, whether it also falls under the Presidential Records Act). That Twitter is a private company providing a platform is not in question, nor is the fact that the internet, at its most idealistic, is an open platform that provides a democratizing voice to the people. But where is the limit when that power is provided by a private company and that speech does harm to people? Especially in an era where the internet is quick to act as judge and jury, and then pass sentence before the person even has a chance to respond? As the saying attributed to C.H. Spurgeon goes, “A lie will go round the world while truth is putting its boots on.” In the digital era, I doubt that truth even has the chance to get out of bed.

*          *          *

In a 2010 New York Times Magazine article, Tom Downey wrote about China’s “human-flesh search engines,” bands of on-line vigilante detectives who take it upon themselves to exact justice on those who they perceive to have gone unpunished, much as in the case of Justine Sacco. As Downey writes, their “goal is to get the targets of a search fired from their jobs, shamed in front of their neighbors, run out of town.” But these on-line groups of amateur detectives are no longer limited to China, and they no longer just go after targets to exact social justice; they also investigate real crimes.

One infamous case was a Reddit thread that sprang up during the massive manhunt for the Boston Marathon bombers. The thread wildly misidentified people who were supposedly linked to the bombing, and while such speculations can eventually be corrected by legitimate investigators, for many, the tarnish to a reputation might never go away. This is the same type of bad information that has given rise to the multiple false stories that have been repeated, and are still being recycled, by so many people on the extremes of the political spectrum. A lie, willful or otherwise, gets reported on the internet, is picked up by multiple sites and personalities, and is repeated over social media; by the time the retraction or the proof that it is false comes out, either no one is paying attention or those who want to believe the story because it conforms to their worldview dismiss the evidence. This is confirmation bias, and it is the essence of filter bubbles on social media.

The power of our social media gatekeepers is thus two-fold: they control what we see, and in doing so feed a loop of outrage that keeps us hitting refresh or checking in more frequently for more; and they can limit our speech by taking down posts and banning people from the platform. If these platforms were truly neutral, actual public commons, then the sites could not exercise such control except in cases of threats of harm to people. Though on the internet, sometimes the line between threat and sarcasm is quite thin. And sometimes a sarcastic rant might end with a SWAT team at your door, as in the case of Joe Lipari, who faced terrorism-related charges after he posted an ill-advised sarcastic paraphrase of the novel Fight Club in response to a bad experience at an Apple store. Even more terrifying is when an on-line disagreement ends with a “swatting,” in which your rival, or a hired gun, calls in a phony threat and gets a SWAT team sent to your door, or someone else’s. Truly, it is a terrifying digital world out there.

*          *          *

If the cases of the Liparis and Saccos of the internet teach us anything, it is not to be an idiot on social media. The same goes for the people who post an ill-advised and vaguely insensitive photo of themselves doing something at a place some or many consider sacred or otherwise solemn. This type of speech should not be restrained, as it has exactly the effect Lessig describes when he writes that “you cannot be fined for promoting segregation, though you will be shunned if you do.” But there is also something to be said about the amplifying effect that the internet has on such public shaming and shunning. Sometimes that amplification is used for public good, as with rallying people to good causes or holding the powerful accountable; other times it harms and silences the very groups and voices that the internet once promised to lift up: the powerless, the underserved. These are usually the groups targeted by the armies of trolls, attacks often aimed at minority voices by people like Yiannopoulos for the express purpose of silencing them. This is the dark side of public shaming, when shaming turns into harassment.

In his book So You’ve Been Publicly Shamed, author and journalist Jon Ronson wrote:

…These giants were being brought down by people who used to be powerless—bloggers, anyone with a social media account. And the weapon that was felling them was a new one: online shaming.

And then one day it hit me. Something of real consequence was happening. We were at the start of a great renaissance of public shaming. After a lull of almost 180 years (public punishments were phased out in 1837 in the United Kingdom and in 1839 in the United States), it was back in a big way. When we deployed shame, we were utilizing an immensely powerful tool. It was coercive, borderless, and increasing in speed and influence. Hierarchies were being leveled out. The silenced were getting a voice. It was like the democratization of justice.

However, the tools for “the democratization of justice,” as we have seen, are also used for the darker ends of harassment. For every example of an oppressed people using the tools of social media to organize revolution against an authoritarian regime, there is an example of those tools being used to organize a white supremacist rally. For every case of women using them to speak out against sexual harassment and abuse, there is a Gamergate. And cases like Justine Sacco’s tiptoe on the edge between public shaming and harassment. Maybe it is something in between the two: bullying. As Ronson describes in his piece on public shaming for The New York Times Magazine:

As time passed, though, I watched these shame campaigns multiply, to the point that they targeted not just powerful institutions and public figures but really anyone perceived to have done something offensive. I also began to marvel at the disconnect between the severity of the crime and the gleeful savagery of the punishment. It almost felt as if shamings were now happening for their own sake, as if they were following a script.

The digital mobs will feed on anyone they can find and harass them into submission, punishing them for a perceived wrong that is often stripped of any mitigating circumstances or context.

However, there are also times when the mob takes aim at targets not because their speech is offensive, but because someone disagrees with it, or simply because they are speaking out at all. Recently the podcast Radiolab, as part of its spinoff series More Perfect, which examines the Supreme Court and legal questions, hosted what it called “The Hate Debate.” The debate featured the show’s legal editor, Elie Mystal, debating Corynne McSherry, legal director of the Electronic Frontier Foundation, and Ken White, lawyer and founder of the popular legal blog Popehat, on the topic of regulating speech on-line. Part of the debate hinged on the question of whether social media networks like Facebook and Twitter should be treated as “neutral” platforms, in which the portal through which the information flows is neither held responsible nor subject to legal regulation because of the speech of its users. Should social networks be left to their own regulatory devices? Should they do nothing to control the armies of trolls who harass women and minorities and effectively silence minority voices? Should they do nothing to stop the organizing of white supremacists or other hate groups? Who should hold the power in these environments? The companies? The governments? The people?

Vox writer and commentator Carlos Maza, in a recent video for that site, says:

Twitter began as a radical experiment in free speech. But over time that experiment began to fall apart because the same features that made Twitter so attractive to citizen journalists and political dissidents also made it a perfect environment for trolls: neo-Nazis, white supremacists, and misogynists. These users realized they could use Twitter’s anonymity and structure to target and harass people they didn’t agree with.

And this is the crux of the problem of free speech: while it can allow a space for healthy debate and accountability, it also allows for the fermenting of socially toxic ideas. It is the same issue we have always wrestled with whenever the KKK holds a public march or the Westboro Baptist Church protests a soldier’s funeral. But where we could count on public pressure and shame to help control such hate groups in the physical world, the internet has handed these types of small groups a powerful tool to organize and harass while also hiding in plain sight. And sometimes the only tool available to counter them is for the internet to publicly shame those it can identify.

We cannot count on the companies who control these platforms to do anything, because while many of them speak about being good corporate citizens, the only bottom line that matters to them is the financial one: returning value to shareholders. I’ve quoted Stephen M. Feldman’s piece from the Marquette Law Review in this space previously, but it bears repeating:

The massive intermediary-MNCs [multinational corporations] therefore control and readily suppress online expression for their own purposes—profit. They have no principled concern for the First Amendment. If it is to their benefit (profit) to invoke the First Amendment, they will do so. If it is to their benefit (profit) to suppress expression, then they will do so. MNCs manipulate the First Amendment and channel individual freedoms for business purposes only.

In other words, companies will ban people like Milo Yiannopoulos only when doing so helps them remain profitable. They will act when the public relations cost-benefit analysis tilts in favor of censoring speech or taking other actions to limit hate speech and hate groups. To the corporations that hold the power over our internet speech platforms, these platforms are not “free.” They are money-making enterprises provided to us for “free,” and they will remain “free” only so long as the companies can turn a profit from selling ad space and data to information brokers. Once the pendulum swings the other way, the rules will change. The companies hold the power, and they will limit and censor as it benefits their financial bottom line. Whether this is the right way to handle on-line speech is an open question, and whether it is legal is the looming question over the entire digital landscape.

*          *          *

The magazine Wired recently devoted an entire issue, its “Free Speech Issue,” to this topic, and so many of the articles within it get at the tension that I also feel surrounding this debate. On the one hand, I don’t want neo-Nazis, misogynists, so-called Men’s Rights Activists, racists, terrorists, and basically anyone who would undermine democracy or use their speech to silence others to have such a powerful platform. On the other hand, once we start…where do we stop? Once that first step is taken, that door opened, once we make it okay to police some speech (and let’s be honest, speech here is but one step removed from thought), where is the line, and who decides it?

One of the articles in the issue was devoted to the central question I have posed here: should social media platforms be regulated under the First Amendment? It is a hard question, and I recommend reading the entire article because it makes a tricky legal argument; I really don’t know what my answer would be if I had to give one. However, I think the more illuminating piece is the one on Cloudflare and its decision to stop providing DDoS protection for a prominent neo-Nazi website. The piece goes in-depth on the CEO wrestling with the philosophical issues surrounding how he finally made the decision to, this one time, stop providing services to the site. And, like him, while I would not want myself or my company associated with such speech, if we start down this road, what is the end point? And let’s be clear, I hate the “slippery slope” argument…but I can’t help but invoke it for this issue.

I donate monthly to the ACLU, and have ever since November 2016, for obvious reasons. And I have always supported the idea behind decisions like the ACLU’s defense of the KKK’s right to hold a march, among other such contentious stands. While I do not support the KKK, what it stands for, or its history of intimidation, lynching, and generally being horrible people, I was raised to believe that in America, everyone has the right to speak their mind. But the internet has changed everything. The tools of speech that it gives us have now shown their dark side. The utopian ideals of the internet pioneers lie burning in a bonfire set by the trolls and Milos and #Gamergaters of the world, and the fire illuminates them as they dance gleefully around the pyre.

And I have no idea where we go from here.
