Facebook Unveils Photo-Matching Tool In Effort To Crack Down On Revenge Porn
Two years ago, Facebook clarified its approach to complaints about supposedly offensive posts. Now, the company is circling back, releasing new tools to ensure people can’t re-share intimate images previously reported and tagged as revenge porn.
Facebook announced Wednesday that it has implemented a photo-matching tool that will prevent someone from re-sharing such photos on Facebook, Messenger, and Instagram.
The tool, developed in partnership with safety experts, is part of Facebook’s “ongoing effort to help build a safe community on and off” the social network.
Here’s how the new tool works:
Users who see a photo they believe was shared without permission can report it using the “Report” link that appears when they tap the downward arrow or “…” next to the post.
The post is then sent to “specially trained” representatives of Facebook’s Community Operations team for review. If it is deemed to violate Facebook’s Community Standards, it will be removed.
The company will then use photo-matching technology to block further attempts to share the image on Facebook, Messenger, and Instagram (see the sketch after these steps for how such matching might work).
When someone later tries to share the image, Facebook will alert them that the post violates its policies and block it from going up.
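Facebook hasn’t published the details of its matching system, but one common approach to this kind of problem is perceptual hashing: each removed image is reduced to a compact fingerprint, and new uploads are compared against the fingerprints of images that have already been reviewed and taken down. The sketch below is purely illustrative and assumes a simple average-hash scheme; the `blocked_hashes` set and the distance threshold are hypothetical stand-ins, not anything Facebook has described.

```python
# Illustrative sketch only: Facebook has not disclosed how its photo matching
# works. This assumes a basic perceptual "average hash," which tolerates
# resizing and re-compression better than an exact byte-for-byte checksum.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to an 8x8 grayscale grid and compare each pixel
    to the mean brightness, producing a 64-bit fingerprint."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")


# Hypothetical blocklist of fingerprints for images already reviewed
# and removed by human moderators.
blocked_hashes: set[int] = set()


def is_blocked(path: str, threshold: int = 5) -> bool:
    """Flag an upload whose fingerprint is within a few bits of any
    previously removed image."""
    candidate = average_hash(path)
    return any(hamming_distance(candidate, h) <= threshold for h in blocked_hashes)
```

Unlike an exact file hash, a perceptual fingerprint like this still matches when a photo has been resized or re-encoded, which is why this style of matching can catch re-uploads of the same image rather than only identical copies.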
Facebook notes that it does have an appeals process for users who believe an image was taken down in error.
The company says that it developed the photo-matching tools after meeting with more than 150 safety organizations last year.
“Facebook is in a unique position to prevent harm, one of our five areas of focus as we help build a global community,” Antigone Davis, Head of Global Safety at Facebook, said in a blog post. “We are grateful for all of the advice and assistance we received in developing these tools and resources.”
Facebook has been under increasing pressure — from both its users and governments around the world — to minimize abuse.
The company recently told a German court that it can react to reports of misuse, but the sheer size of the Facebook user base makes it difficult to be more proactive.
“There are billions of postings each day,” one Facebook lawyer told the court in a case where the site was being asked to prevent users from sharing one particular libelous meme. “You want us to employ a sort of wonder machine to detect each misuse. Such a machine doesn’t exist.”
Today’s announcement appears to be a step toward a more preemptive approach, giving Facebook the ability to try to prevent repeat instances of abuse.