Twitter’s updated T&Cs look clearer — yet it still can’t say no to nazis


Twitter has taken a pair of shears to its user rules, shaving almost 2,000 words off its T&Cs — with the stated aim of making it clearer for users what is not acceptable behaviour on its platform.

It says the rules have shrunk from 2,500 words to just 600 — with each of the reworded rules now encapsulated within a pithy tweet length (280 characters or less).

Though each tweet-length rule is still followed by plenty of supplementary detail — where Twitter explains the rationale behind it, gives examples of what not to do, and sets out potential consequences. So the full rule-book still runs way over 2,500 words.

“Everyone who uses Twitter should be able to easily understand what is and is not allowed on the service,” writes Twitter’s Del Harvey, VP of trust and safety, in a blog post announcing the changes. “As part of our continued push towards more transparency across every aspect of Twitter, we’re working to make sure every rule has its own help page with more detailed information and relevant resources, with abuse and harassment, hateful conduct, suicide or self-harm, and copyright being next on our list to update. Our focus remains on keeping everyone safe and supporting a healthier public conversation on Twitter.”

The newly reworded rules can be found at: twitter.com/rules

We’ve listed the tweet-sized rules below, without any of their qualifying clutter:

  • You may not threaten violence against an individual or a group of people. We also prohibit the glorification of violence.
  • You may not threaten or promote terrorism or violent extremism.
  • We have zero tolerance for child sexual exploitation on Twitter.
  • You may not engage in the targeted harassment of someone, or incite other people to do so. This includes wishing or hoping that someone experiences physical harm.
  • You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
  • You may not promote or encourage suicide or self-harm.
  • You may not post media that is excessively gory or share violent or adult content within live video or in profile or header images. Media depicting sexual violence and/or assault is also not permitted.
  • You may not use our service for any unlawful purpose or in furtherance of illegal activities. This includes selling, buying, or facilitating transactions in illegal goods or services, as well as certain types of regulated goods or services.
  • You may not publish or post other people’s private information (such as home phone number and address) without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.
  • You may not post or share intimate photos or videos of someone that were produced or distributed without their consent.
  • You may not use Twitter’s services in a manner intended to artificially amplify or suppress information or engage in behavior that manipulates or disrupts people’s experience on Twitter.
  • You may not use Twitter’s services for the purpose of manipulating or interfering in elections. This includes posting or sharing content that may suppress voter turnout or mislead people about when, where, or how to vote.
  • You may not impersonate individuals, groups, or organizations in a manner that is intended to or does mislead, confuse, or deceive others.
  • You may not violate others’ intellectual property rights, including copyright and trademark.

Notably the rules make no mention of fascist ideologies being unwelcome on Twitter’s platform. Although a logical person might be forgiven for thinking such hateful stuff would naturally be prohibited — based on the core usage principles Twitter is stating here (such as a ban on threatening and/or promoting violence against groups of people including on the basis of their race, ethnicity and so on).

But for Twitter, nazism remains, uh, ‘complicated’.

The company recently told Vice it’s working with researchers to consider whether or not it should ban nazis. Which suggests its new ‘pithier’ rules are missing a few qualifying asterisks.

Here, we fixed one:

  • You may not threaten violence against an individual or a group of people*. We also prohibit the glorification of violence**. *unless you’re a nazi **white supremacists totally get a pass while we mull the commercial implications of actually banning racist hate

Another abuse vector that continues to look like a blind spot in Twitter’s rule-book is sex.

While the company does include both ‘gender’ and ‘gender identity’ among the many categories it stipulates users must not direct harassment at, or promote violence against, it does not offer the same shield based on a user’s sex. Which appears to have resulted in instances where Twitter has deemed tweets containing violent misogyny not to be in violation of its rules.

Last month a Twitter UK public policy rep told the parliamentary human rights committee, which had raised the issue of the violent sexist tweets, that it believed the inclusion of gender should be enough to protect against instances of violent misogyny, despite having demonstrably failed to do so in the selection of tweets the committee put to it.

We’ve asked Twitter about its continued decision not to prohibit harassment and threats of violence against users based on their sex, as well as its ongoing failure to ban nazis, and will update this report with any response.

In addition to editing down the wording of its rules, Twitter says it has thematically organized them under three new categories — safety, privacy, and authenticity — to make it easier for users to find what they’re looking for.

Though it’s not quite that clear at a glance on the rules page itself — which also includes a general preamble; a note on wider content boundaries; a section dealing with spam and security; and an addendum on content visibility restrictions that Twitter may apply in cases where it suspects an account of abuse and is investigating.

But, as ever, algorithmically driven platforms are anything but simple.

Hideously wordy T&Cs have of course been a tech staple for years, so it’s good to see Twitter paying greater attention to the acceptable-conduct signals it gives users — and at least trying to boil down a clearer essence of what isn’t acceptable behavior, albeit tardily.

But, equally, the refreshed wording of what’s unacceptable makes it plainer that Twitter retains stubborn blind spots that allow its platform to be a conduit for targeted racial hatred.

Perhaps these blind spots are commercially motivated, in the case of far-right ideologies. Or perhaps Twitter’s leadership is still so drunk on its own philosophical Kool-Aid that it really has fuzzed the lines between fascism and, er, humanity.

If that’s the case, no pithily written rules will save Twitter from itself.

Don’t forget, this is a company that has been promising to get a handle on its abuse problem for years. Including — just last year — making a grand show of wanting to champion ‘conversational health’.

Yet it still can’t screw its courage to the sticking place and say no to nazis.

Twitter’s multi-year struggle to respond to baked-in hate would be farcical at this point — if the human impacts of amplifying racial and ethnic hatred weren’t a tragedy for all concerned.

And had it found a moral compass when it was first being warned about the rising tide of amplified abuse, it’s entirely possible one of its most high-profile users might not be a geopolitical mega-bully known to retweet fascist propaganda.

Chew on that, Jack.
