Facebook’s Robots Are Working Hard On Content Moderation So Humans Don’t Have To

Image courtesy of Poster Boy

Sometimes it’s a bad thing when a robot gets invented to do a human job. And other times, it can be a relief, because the job was really terrible for any human to do. That’s the approach Facebook is taking with content moderation now: getting its AI to identify and “quarantine” offensive content before any human has to see it.

As TechCrunch reports, Facebook has recently hit a major milestone in their AI training: the software now reports more offensive photos on the massive global platform than humans do, and that’s a big deal.

When the internet was entirely made of text, content moderation was one thing. Mods could more often than not take the “sticks and stones” approach and delete or disemvowel nasty vitriol as needed. But now we’re in the multimedia era, and content moderation means sifting through piles of truly horrific HD video and images to determine just how horrific they are.
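For the uninitiated, “disemvoweling” is exactly what it sounds like: the comment stays up, but with its vowels yanked out so it loses most of its sting. Here’s a minimal sketch of the idea in Python (the function name is illustrative, not any site’s actual code):

```python
# Disemvoweling, sketched: strip the vowels from an abusive comment
# instead of deleting it outright.
def disemvowel(comment: str) -> str:
    """Return the comment with all of its vowels removed."""
    return "".join(ch for ch in comment if ch.lower() not in "aeiou")

print(disemvowel("This comment is awful"))  # prints "Ths cmmnt s wfl"
```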

That means moderating content can be a really awful job. Imagine content so bad you actually flag it for someone to remove. Now imagine that your full-time job is staring at all that content, eight hours a day.

Hate speech, threats, child pornography, and animal cruelty add up, and it all takes a toll on any human worker. Even writers who only have to moderate the comments on their own articles (as opposed to a whole site like Facebook) can get badly burned out by the constant stream of images.

So! Enter the AI. Using tech to filter content is nothing new, but it’s a lot easier for a machine to identify a character string that spells a naughty word than it is to correctly identify the contents of an image or video. At least, it has been.
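To see why, compare the text-era approach to what image moderation demands. A hypothetical blocklist filter like the one below (the blocklist and function name are made up for illustration) catches a naughty word with a few lines of string matching; there is no equivalent one-liner for figuring out what’s actually in a photo or video.

```python
# A hypothetical text filter: flag a post if any of its words appears on a
# blocklist. Cheap, fast, and easy for a machine. (BLOCKLIST is illustrative.)
BLOCKLIST = {"badword", "slur", "threat"}

def flag_text(post: str) -> bool:
    """Return True if any word in the post is on the blocklist."""
    return any(word.strip(".,!?") in BLOCKLIST for word in post.lower().split())

print(flag_text("What a lovely day"))         # False
print(flag_text("You're a badword, buddy!"))  # True
```

Recognizing a violent video frame, by contrast, takes a trained model and a mountain of labeled examples, which is where Facebook’s machine learning work comes in.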

Facebook, though, has access to a lot of data. A LOT of data. And they can use all that data, and all their computing power, to make their software smarter. So they are. In fact, AI is a cornerstone of Facebook’s entire current ten-year plan.

As Facebook’s Director of Engineering for Applied Machine Learning, Joaquin Candela, explained to TechCrunch, the company’s AI is doing everything from generating audio captions of images for visually impaired users to individually ranking the items in your news feed. And now, with video hosting and live streaming being major parts of the Facebook experience, the AI is getting smarter about video, too.

Facebook wants their AI to be able to automatically tag users in videos through facial recognition, the same way they do in still images. Along with that, they’ve built a system for automatically categorizing video by topic, so that all the cat videos can present themselves to you on cue.

Creepy? Yeah, probably. But it does come with a silver lining: if you’ve got a machine that can automatically recognize and categorize content, that also means it can flag the really problematic stuff. “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people,” Candela told TechCrunch. “The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human.”

With a platform of Facebook’s size, that’s a simply staggering number of images. Globally, 400,000 new posts are published every minute (that’s about 576 million per day), and another 180 million comments are left on public pages each minute (about 259.2 billion daily). That is, in the aggregate, a crapton of data. You need robots for the volume alone, and if they spare some human eyes, so much the better.
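(Those per-day figures follow straight from the per-minute rates; the numbers are the article’s, and this is just a back-of-the-envelope check.)

```python
# Back-of-the-envelope check of the volume figures cited above.
minutes_per_day = 60 * 24  # 1,440

posts_per_day = 400_000 * minutes_per_day         # 576,000,000
comments_per_day = 180_000_000 * minutes_per_day  # 259,200,000,000

print(f"{posts_per_day:,} posts and {comments_per_day:,} comments every day")
```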

Another Facebook developer told TechCrunch that the same tech is in use, or soon will be, across every Facebook property. Instagram already uses it, WhatsApp uses parts of it, and they’re working on ways to get more of it into Oculus (where terrible content wouldn’t just be a thing you see, but a thing you experience).

They’re also sharing the tech outside of Facebook. Developers told TechCrunch that the tech giant has held meetings with Netflix, Google, Uber, Twitter, and others to share their AI applications and discuss design details. Is it altruistic? Sure, as much as taking over the world by getting all the competition to use your tools ever is.

But Facebook says it’s more than that. “I personally believe it’s not a win-lose situation, it’s a win-win situation. If we improve the state of AI in the world,” the developer told TechCrunch, “we will definitely eventually benefit. But I don’t see people nickel and diming it.”

Facebook spares humans by fighting offensive photos with AI [TechCrunch]
