Facebook’s Robot Army May Soon Determine If Your Live Video Is Offensive
Facebook — the company whose artificial intelligence has had a wee bit of trouble distinguishing between fake and authentic news sources — believes that its machine censors can be deployed to determine if a user’s live video stream is too naughty or offensive.
This is according to Reuters, which reports that Facebook hopes to turn to “an algorithm that detects nudity, violence, or any of the things that are not according to our policies” to potentially shut down live broadcasts that violate the site’s community standards guidelines.
Like most websites that allow users to freely post content, Facebook has long relied on its user base to flag and report allegedly offensive posts, images, and video. The use of AI — still in the research stage — puts Facebook in the position of potentially catching this material before its users do, though it appears that a real human may still need to make the final call on whether a flagged video violates those standards.
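What Reuters describes amounts to a two-stage moderation pipeline: an automated classifier scores incoming video against policy categories like nudity and violence, and anything that crosses a threshold gets routed to a human reviewer for the final decision. The sketch below illustrates that flow in Python as a rough approximation only; the category names, thresholds, and classifier stub are all hypothetical, since Facebook has not published how its system actually works.

```python
from dataclasses import dataclass
from typing import Dict

# Hypothetical policy categories and escalation thresholds -- Facebook has not
# disclosed its actual model, labels, or cutoffs.
REVIEW_THRESHOLDS: Dict[str, float] = {
    "nudity": 0.80,
    "violence": 0.75,
}


@dataclass
class ModerationResult:
    scores: Dict[str, float]   # classifier confidence per policy category
    needs_human_review: bool   # True if any score crosses its threshold


def score_clip(clip_bytes: bytes) -> Dict[str, float]:
    """Stand-in for the automated classifier described in the report.

    A real system would run a trained vision model over sampled frames;
    here we return neutral scores just so the sketch is runnable.
    """
    return {category: 0.0 for category in REVIEW_THRESHOLDS}


def moderate_live_clip(clip_bytes: bytes) -> ModerationResult:
    """Flag a live video clip for human review if any category score is too high."""
    scores = score_clip(clip_bytes)
    flagged = any(
        scores[category] >= threshold
        for category, threshold in REVIEW_THRESHOLDS.items()
    )
    return ModerationResult(scores=scores, needs_human_review=flagged)


if __name__ == "__main__":
    result = moderate_live_clip(b"")  # placeholder clip data
    print(result)
```

The key design point in such a pipeline is that the algorithm only escalates; it does not make the final removal decision, which matches the report that a human reviewer still has the last word.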