Twitter Cracks Down On Nudity, Harassment & Violence… Again

Image courtesy of Tom Raftery

For years now, Twitter has been rolling out tool after tool designed to combat harassment and abuse. While the company says these initiatives are working — despite a lack of data — it’s also doubling down on its efforts: Twitter will soon roll out another set of changes to protect users.

Wired reports that Twitter is prepping to unveil a set of new policy updates in the coming weeks, once again focused on cutting down on harassment and abuse between users.

The policy changes — highlighted in an email to the company’s Trust and Safety Council — include a more rigid stance on “nonconsensual nudity,” or images or videos shared without permission, and a streamlined process to report “unwanted sexual advances.”

Non-Consensual Nudity
While Twitter cracked down on so-called revenge porn in 2015 through a change to its terms of service, the company is now stepping up its efforts to combat this kind of harassment.

Revenge porn refers to nude photos or videos posted online without the subject’s consent, an issue many sites have been dealing with.

Currently, Twitter says that a person who Tweets non-consensual nudity — either maliciously or inadvertently — is temporarily blocked and the content is deleted. If the user posts non-consensual nudity again, Twitter permanently deletes the account.

With the upcoming changes, Twitter says it will “immediately and permanently suspend” any account identified as the original poster or source of non-consensual nudity. The same goes for a user who makes it clear they are intentionally posting said content to harass their target.

The company says it will do a full account review whenever it receives a report about a Tweet containing non-consensual nudity. If the account appears to be dedicated to such posts, it will be suspended immediately.

Finally, Twitter says it is expanding its definition of non-consensual nudity to include content such as upskirt images, “creep shots,” and hidden camera content.

“While we recognize there’s an entire genre of pornography dedicated to this type of content, it’s nearly impossible for us to distinguish when this content may/may not have been produced and distributed consensually,” the email reads. “We would rather error on the side of protecting victims and removing this type of content when we become aware of it.”

Unwanted Advances
Twitter generally allows pornographic content on its site. To determine whether a conversation is consensual, Twitter currently relies on reports and takes action only if it receives one from a participant in the conversation.

Going forward, the social media site says the updated rules will make it clearer that this kind of harassing behavior is not acceptable.

“We will continue taking enforcement action when we receive a report from someone directly involved in the conversation,” the email reads.

The company will also debut tools to make it easier for bystanders and witnesses to report that an interaction is unwanted, and will use signals from previous interactions, such as when someone is blocked or a conversation is muted, to determine whether content crosses the line and act on it accordingly.

Hate Symbols & Violent Groups
Twitter notes that it is still defining the exact scope of what will be covered when it comes to an upcoming policy on hate symbols and imagery, as well as violent groups.

At a high level, Twitter says that hateful imagery, hate symbols, and other similar content will be considered sensitive media and treated in a similar way to adult content and graphic violence.

As for violent groups, the company says it will take enforcement action against organizations that use or have historically used violence as a means to advance their cause.

Tweets That Glorify Violence
Another new policy will revolve around Tweets that promote direct threats, vague violent threats, and hopes or wishes for physical harm.

Moving forward, Twitter will also take action against content that glorifies and/or condones violence.

Example Tweets include: “Praise be to <terrorist name> for shooting up <event>. He’s a hero!” or “Murdering <x group of people> makes sense. That way they won’t be a drain on social services.”

“We realize that a more aggressive policy and enforcement approach will result in the removal of more content from our service,” Twitter says in the email. “We are comfortable making this decision, assuming that we will only be removing abusive content that violates our Rules.”

In an effort to ensure that only violating Tweets are removed and that action is taken against the right users, Twitter’s product and operational teams will work to improve the appeals process and turnaround times for their reviews.

A rep for Twitter tells Wired that the company plans to unveil more details on the new guidelines in the coming weeks.
