How Facebook Decides What Needs To Be Deleted
Everything is on Facebook, but some things shouldn’t be. The job of determining what needs to go, and why, is high-stakes and often confusing. And now, dozens of leaked documents from inside Facebook show just how hard those calls can be for the moderators who have to make them.
The Guardian obtained several internal training documents showing how Facebook teaches its moderating staff what is and isn’t acceptable on the site. The paper is hosting and explaining a number of them in a series it’s calling the Facebook Files.
The Facebook Files that the paper has amassed include manuals on how to handle revenge porn, sex and nudity in art, sexual activity in general, bullying, cruelty to animals, graphic violence, threats of violence, and child abuse, among other upsetting issues. (The links lead to articles about, or galleries of, the manuals; they include language and descriptions that may be upsetting, but no graphic images.)
At its most recent investor presentation, earlier in May, Facebook said it now has approximately 1.94 billion monthly active users worldwide. That’s roughly 25% of the entire human population of Earth, all using Facebook.
To handle the flow of information that comes from nearly two billion users, Facebook has 4,500 content moderators (soon to be 7,500 total). That means that, even after all the new hires, users will still outnumber moderators by more than 250,000 to 1, so moderators need a system for moving quickly through everything flagged for their attention.
Content moderators told The Guardian that they get two weeks of training, plus a big handful of manuals.
“We aim to allow as much speech as possible but draw the line at content that could credibly cause real harm,” one of the training manuals says. “We aim to disrupt potential real world harm caused from people inciting or coordinating harm to other people or property by requiring certain details be present in order to consider the threat credible.”
In other words, Facebook determines whether or not it’s a problem when you say, “burn it all down and salt the ashes” by looking at context to see whether you’re “calling for violence in generally facetious and unserious ways” to “express disdain or disagreement,” or whether you’re actually outlining a plan to literally go burn something down.
For Facebook, the big differentiator seems to be intent. If a horrific image of violence or abuse appears to be educational or can increase “awareness,” it gets to stay. If it appears to be “celebratory” or deliberately sadistic, it goes.
A whole range of disturbing content in between can be “marked as disturbing,” meaning it won’t autoplay and may be age-gated so that only users over 18 can see it, but can otherwise stay. A photo may also be left alone entirely, while a video of the same act gets marked as disturbing.
For example, under the “child abuse” section of the “graphic violence” handbook, Facebook says in bullet points, “We do not action photos of child abuse. We ‘mark as disturbing’ videos of child abuse. We remove imagery of child abuse if shared with sadism and celebration.”
Facebook’s rationale? The handbook says, “We allow ‘evidence’ of child abuse to be shared on the site to allow for the child to be identified and rescued, but we add protections to shield the audience.” The document does not mention protections to shield the child.
Meanwhile, the “credibility” of death threats seems to have as much to do with the target as it does with the statement. Much of the manual on violent statements deals with threats against vulnerable persons or groups.
Vulnerable persons are those likely to be targets: heads of state, their successors, and candidates for the role; law enforcement officers, witnesses, and informants; activists and journalists; and anyone who’s on a known hit list or has previously been the target of an assassination attempt.
Some groups of people are also considered vulnerable as a whole, but which groups count as vulnerable varies both globally and locally.
For example, one training slide lists “Homeless people,” “Foreigners,” and “Zionists” as globally vulnerable, but “drug dealers, drug users and drug addicts” as vulnerable specifically in the Philippines.
In sample statements, Facebook considers “We should put all foreigners into gas chambers” to be a credible threat of violence, but puts “Kick a person with red hair,” “Let’s beat up fat kids,” and other statements that might sound pretty specific into the bucket of statements to be left alone.
If threats of violence are considered credible, Facebook mods are supposed to delete or escalate them. Otherwise, they just hang out there.
Overall, the guidelines seem to contain a lot of contradictory information and room for error. In short, they’re confusing at best. And moderators don’t act proactively; they only step in when someone actually takes the time to submit a report on a post, saying what they find objectionable and why.
Moderators, however, told The Guardian that their work is overwhelming. In a statement, Facebook acknowledged that reviewing the worst humanity has to offer is a “challenging and difficult job” that can easily lead to burnout.
“A lot of the content is upsetting. We want to make sure the reviewers are able to gain enough confidence to make the right decision, but also have the mental and emotional resources to stay healthy. This is another big challenge for us,” the company told The Guardian.