Facebook: There Is No “Wonder Machine” To Automatically Detect Hate Speech, Abuse

Image courtesy of Steve R.
Social media services like Facebook and Twitter have taken a lot of heat in recent years over the hateful things their users share with the rest of the world. Some have accused these companies of not doing enough to prevent this sort of behavior in the first place, but Facebook says there isn’t much more it can do right now.
Lawyers for Facebook were in court in Germany today. The company is fighting a court order that would require Facebook to block users from posting photos of German Chancellor Angela Merkel and teenage Syrian refugee Anas Modamani. The image has been misappropriated by Islamophobic Facebook users, some of whom have falsely claimed that Modamani tried to kill a homeless person, or that he took part in terror attacks.
Modamani’s attorney argued that Facebook should be compelled to block such libelous uses of the image, but Facebook countered that this simply isn’t possible given the sheer size of its audience.
“There are billions of postings each day,” Facebook lawyer Martin Munz told the court, according to Bloomberg. “You want us to employ a sort of wonder machine to detect each misuse. Such a machine doesn’t exist.”
Facebook does give users the ability to report potential abuse, but the company currently has few legal obligations to remove disputed content. That might change, at least in Germany, where Bloomberg notes that Merkel’s government is considering a law that would require Facebook to respond to user complaints within one day or face financial penalties.