
Does Twitter Really Want to Solve Its Harassment Problem?

The company has issued new guidelines and removed verifications for white supremacists. But it's unclear whether it can change its culture.


With the recent rollout of yet another set of anti-harassment initiatives, it looks like Twitter is beginning to take its reputation as a haven for abuse seriously. While slurs, death threats, and stalking have been rampant on its platform for almost a decade, as BuzzFeed’s Charlie Warzel reported last summer, the 2016 presidential election became a watershed moment for Twitter’s public image. Suddenly, the company that once considered itself “the free speech wing of the free speech party” was forced to consider how to extricate certain kinds of speech from its platform. Early this month, Twitter announced a series of impending updates to its usage rules, particularly in sections concerned with abusive behavior, self-harm, spam, and graphic violence. “Online behavior continues to evolve and change, and at Twitter, we have to ensure those changes are reflected in our rules in a way that’s easy to adhere to and understand,” the company said in a November 3rd blog post.

To some, any news is good news from a platform that, even by its own account, has been negligent in attending to harassment at both a user and an infrastructural level. In November 2016, shortly after the election, the company introduced its first major anti-abuse feature in over a year (the last being the automatic content filter in 2015). “What makes Twitter great is that it’s open to everyone and every opinion,” the company wrote. “Because Twitter happens in public and in real-time, we’ve had some challenges keeping up with and curbing abusive conduct.” Twitter unveiled a three-pronged approach to its harassment problem, which included an expansion of the mute feature, a new option to report “hateful conduct,” and a better-educated support team that would receive “special sessions on cultural and historical contextualization of hateful conduct.” Twitter carefully avoided making any promises. “We don’t expect these announcements to suddenly remove abusive conduct from Twitter,” the company concluded. “No single action by us would do that.” It’s a convenient precedent.

But the company took so long to publicly acknowledge and then address its abuse problem that many users found the new policy underwhelming. Following the November 2016 announcement, users couldn’t help but notice something suspicious about improvements preoccupied with merely hiding “hateful conduct” behind opt-in mute features and enhanced content filters. Many women of color, for whom harassment is a long-embedded feature of the Twitter experience, have been attentive to the ever-conciliatory nature of the company’s stance on abuse. Writing for Motherboard, Brianna Wu, a primary target of Gamergate, noted how much of the 2016 update focused on concealing abuse from view rather than anticipating how it arises or eliminating it altogether. Twitter’s statement is similarly “evasive,” she wrote; indeed, it discusses abuse as an internet-wide phenomenon yet fails to mention how or why Twitter specifically has become a hotbed for this behavior.

It didn’t seem coincidental, either, that these 2016 adjustments came shortly after Disney chose not to purchase the company, a deal that had been under consideration as recently as the fall of that year. Bloomberg cited bullying as one concern that ultimately led Disney to look elsewhere. “What’s happened is, a lot of the bidders are looking at people with lots of followers and seeing the hatred,” said CNBC’s Jim Cramer. “I know that the haters reduce the value of the company.” A reputation as the digital headquarters for the current wave of white nationalism and misogyny isn’t just bad PR—it’s also unprofitable.

Though the bland corporate language of Twitter’s policy statements refuses to name any pointed examples, it’s easy to see where the past few years of high-profile harassment and the 2016 election have wormed their way into how the company re-articulates its policies. These updates tend to address racist and gendered abuse, or at least Twitter imagines they do. More recently, the October 27th addition of “non-consensual nudity” to the list of prohibited behavior has been interpreted as the company’s anti-revenge porn measure; it arrived not long after reality star Rob Kardashian tweeted out private nude photos of his ex-partner Blac Chyna. The rules now also include a note on “unwanted sexual advances,” which says users “may not direct abuse at someone by sending unwanted sexual content, objectifying them in a sexually explicit manner, or otherwise engaging in sexual misconduct.” In the months ahead, Twitter will implement what it calls “past relationship interaction signals” to help determine whether an interaction is consensual.

Still, these halfhearted measures mean that Twitter continues to delegate the regulation of everyday abuse to its users, without itself having to alienate harassers—or affect its usage numbers. That position extends to the company’s definition of hateful conduct, which carefully avoids naming the app’s most vulnerable users—women, gay and trans people, and people of color. Instead, the rules prohibit violence, threats, or harassment “on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” Such a description is unhelpfully vague considering years of evidence showing that it is white users, cis and straight users, and men who most often harass women (especially women of color) and queer people (especially queer people of color).

With such rigorously imprecise phrasing, Twitter opens the door to spurious accusations that try to flip those categories on their head. White people can claim to be victims of “reverse racism”; men can claim to be victims of “misandry.” It doesn’t take much imagination to see how “hateful conduct” can be co-opted by white supremacists and misogynists, who gain the opportunity to label anything that smells like a critique of whiteness, patriarchy, or heteronormativity as a statement of hate. This has already begun to happen.

And, indeed, in the year since the new policy took effect, we’ve seen the dangers of such imprecision, as black voice after black voice finds themselves suspended or shadow-banned, while the racists who flocked to their accounts to attack them receive no such punishment. This summer Anthony Oliveira (@meakoopa), a queer writer and scholar, was suspended shortly after linking the Instagram account of a homophobe in his mentions. Twitter did not inform Oliveira why he was disciplined and declined to comment on the suspension. While Twitter has gotten its due pats on the back for kicking out the likes of Martin Shkreli and Milo Yiannopoulos, those actions came only after their respective harassment campaigns became public enough for CEO Jack Dorsey to notice. Twitter seems to have no problem penalizing everyday black, brown, and queer users; meanwhile, it takes a high-profile, potential PR disaster for open white supremacists to face similar treatment. Such uneven enforcement should cause us to wonder whose private information is truly sacred. Whose privacy is Twitter really protecting?

In a recent version of its rules, Twitter claimed to “believe in freedom of expression and in speaking truth to power.” Originated by the queer civil rights activist Bayard Rustin and a common refrain among artists and activists, “speaking truth to power” is a phrase that affirms the necessary and often dangerous work of telling it like it is in the face of institutions and oppressors who would prefer silence. As of November 3rd, the day Twitter announced its new updates, Rustin’s phrase is gone. “We believe in freedom of expression and open dialogue,” the rules now read. Not long after, Twitter verified Jason Kessler, the white supremacist organizer of the violent Unite the Right rally in Charlottesville, Virginia, that ended with the murder of anti-racist activist Heather Heyer. This small change rather encapsulates the entirety of Twitter’s methodology, one that could—but would prefer not to—speak truths about the powerful users who abuse the platform. In the face of real hate and violence, Twitter chooses silence.

As predicted, Twitter’s stance against hateful content has not eliminated harassment, and the company has at least noticed that the problem persists. Twitter Safety’s new version of its rules aims to “clarify” current policies and to elaborate on how and why it enforces them. “While the fundamentals of our policies and our approach have not changed,” said the November 3rd post, “this updated version presents our rules with more details and examples.” In other words, these updates merely elaborate the same rules; they do not reflect substantive changes in company ethics and priorities. These are not new rules. This month’s updates are part of a larger Twitter Safety initiative that officially began October 27th and will continue through January 10th. This week, as part of the initiative, the company removed blue checkmark verifications from users who routinely spout racist bromides, including the white nationalist Richard Spencer and the right-wing troll Laura Loomer—though it did not remove their accounts from the site.

With the dizzying progression of updates, it feels like something, or a lot of somethings, is being done. But whether all that talk of change and heightened enforcement will actually remedy the issues that plague Twitter remains uncertain. There’s reason to suspect that, much like last time, these improvements are only superficial. Some of them might actually make things worse.