Jillian C. York
Electronic Frontier Foundation
doi: 10.15307/fcj.mesh.008.2015
When popular technologies are used to work against people, it is natural to look for solutions. But what if there is no perfect solution? In this article, the Director for International Freedom of Expression at the Electronic Frontier Foundation, Jillian York, examines how social media harassment leads to complicated frictions between free speech and the protection of other basic human rights.
Don’t feed the trolls.
Sticks and stones may break my bones, but names will never hurt me.
They’re just words.
These are just a few of the things said to individuals—particularly women—who speak out about harassment they’ve experienced online. Time and time again, they are told to simply ignore it, to clamp down on their own privacy settings, or worse, that online harassment or stalking isn’t real harassment or stalking.
The problem appears to be getting worse. As we increasingly spend large portions of our lives on social media platforms, it would be unwise to ignore the architecture of these services and the role they play in facilitating communication. The style of conversation enabled by a given platform can have unintended effects: for example, Twitter’s anonymous, public, short-message format has been accused of fostering anger and harassment.
In light of this, discussions about harassment have come to the fore in recent months. Organisations such as the one I work for, the Electronic Frontier Foundation, have struggled to find the right answer in addressing this problem. While preserving free speech for all has traditionally meant fighting back against censors—be they governments or corporations—harassment itself can and does act as a censor, forcing its victims offline out of fear or exhaustion.
In an October 2014 article in the Atlantic, authors Catherine Buni and Soraya Chemaly argued that, by allowing certain offensive content to remain available on social media in the name of free speech, social media companies are effectively aiding in the silencing of women. In the same article, the authors quoted a piece I had written earlier on the same subject: ‘The question is not can Facebook censor speech, but rather, should it?’ given that any censorship of content ‘sets a dangerous precedent for special interest groups looking to bring their pet issue to the attention of Facebook’s censors.’ The authors criticised my use of the term ‘pet issue’ to describe a problem that affects half the world’s population. In fact, I was not describing violence against women as a pet issue, but rather the Pandora’s Box that has opened in response to allowing certain groups to assist companies in the policing of speech.
Twitter recently invited the Women, Action, and the Media (WAM!) network to assist the company in more effectively monitoring and parsing through reports of harassment—presumably to help define what constitutes harassment on the platform. Following WAM!’s involvement and further media attention to the harassment problem, Twitter convened several meetings with advocacy groups. Among the groups invited were EFF, as well as the Anti-Defamation League (ADL), an organisation with the stated goal of ‘combating hate.’ In practice, however, the ADL has long been implicated in attempts at censoring critics of Israel, particularly on college campuses.
The concern I share with colleagues at EFF is not that eliminating images encouraging rape will result in a slippery slope; rather, that special interest groups will persuade Twitter to do their bidding, resulting in the platform looking more like Facebook—which seeks to create a ‘family-friendly space’ by censoring heavily—than ‘the free speech wing of the free speech party,’ as Twitter’s general manager once billed the site (quoted in Halliday, 2012).
The plan set forth by Twitter and WAM! will undoubtedly work in the short term, as Twitter removes accounts set up to harass individuals; but the ease with which one can set up a new account on the platform means that dedicated harassers won’t be gone for long.
Other solutions proposed thus far have focused on treating the symptoms rather than the cause. There have been proposals to eliminate anonymity from the Internet, to proactively censor certain words on social media platforms, and to prosecute harassers more harshly. There have been countless calls for social media companies to ‘do something!’ without much clarity about what that ‘something’ might be. A problem with many of these solutions is that, in addition to potentially curbing harassment, they will surely have a chilling effect on other speech. For example, eliminating anonymity to stop harassment would also eliminate anonymity for those who need it most: human rights defenders, victims of gendered violence, and young people.
The more prudent solutions have revolved around giving individuals better tools for reporting and blocking other users. One such tool is BlockTogether, which gives individuals more control over who can follow or interact with them by allowing them to create group blocklists that can be shared with others. Twitter is also making its own dashboard easier to use, simplifying the process of blocking. These solutions allow users to feel safer without risking censorship, but may not go far enough in many cases.
These tools are not without their critics, who argue that mass blocking could have adverse effects if, for example, an individual were mistakenly placed on a block list that was then shared widely.
It seems, then, that there is no solution that will both ensure the protection of free speech and ensure that all individuals feel comfortable expressing themselves. And, in order to accomplish the latter, it seems increasingly likely that we will have to compromise on the former.
Harassment must never be tolerated, but suppressing it—rather than countering it—will not swiftly rid us of the problem, and is likely to create new ones. Concerns about who regulates speech are valid in a time when our major platforms for expression are designed and controlled by U.S. corporations governed by men. The agendas of carceral states must also be a concern, lest we encourage the pursuit of harsh, punitive remedies. Online harassment is a complex problem with no clear solutions, and this should be our starting point as we seek to address it.
Author Biography
Jillian C. York is a writer and activist focused on the intersection of technology and policy. She currently serves as Director for International Freedom of Expression at the Electronic Frontier Foundation and is a Fellow at the Centre for Internet & Human Rights in Berlin.
Acknowledgement
I’d like to thank Soraya Chemaly and Jaclyn Friedman for making me think harder about this problem, and my colleagues at EFF for being supportive in that process.
References
- Buni, Catherine and Soraya Chemaly. ‘The Unsafety Net: How Social Media Turned Against Women,’ The Atlantic, 9 October, (2014) https://www.theatlantic.com/technology/archive/2014/10/the-unsafety-net-how-social-media-turned-against-women/381261/
- Halliday, Josh. ‘Twitter’s Tony Wang: “We are the free speech wing of the free speech party”,’ The Guardian, 23 March, (2012) https://www.theguardian.com/media/2012/mar/22/twitter-tony-wang-free-speech
- Nussbaum, Martha C. ‘Haterz Gonna Hate?’ The Nation, 5 November, (2014) https://www.thenation.com/article/haterz-gonna-hate/
- Owens, Simon. ‘The Real Reasons Twitter Can Be So Brutally, Screamingly Terrible,’ New York Magazine, 22 October, (2014) https://nymag.com/scienceofus/2014/10/real-reason-twitter-can-be-horribly-terrible.html
- York, Jillian C. ‘Harassment Hurts Us All. So Does Censorship,’ Medium, 13 September, (2013) https://medium.com/@jilliancyork/harassment-hurts-us-all-so-does-censorship-6e1babd61a9b