Increased Use Of Machine Learning, Facial Recognition Outs Sex Workers’ Real Names

If you operate a video-sharing site with millions of user-uploaded clips, it sounds like a great idea to use software that is smart enough to identify some of the faces in those videos. The clips would be indexed more accurately, and you might be able to spot copyrighted content more readily. But you could also be risking the privacy, and maybe the physical well-being, of those the software identifies. [More]
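
The article doesn’t name the software involved, but the underlying technique is straightforward to sketch. Assuming the open-source face_recognition and opencv-python packages, purely as an illustration and not the site’s actual stack, indexing which known faces appear in a clip might look like this:

```python
# Sketch: index which known faces appear in a video clip.
# Assumes the face_recognition and opencv-python packages; the
# article does not say what software the video site actually used.
import cv2
import face_recognition

def index_video_faces(video_path, known_encodings, known_names, frame_step=30):
    """Return the set of known names whose faces appear in the video."""
    seen = set()
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Sample roughly one frame per second of 30 fps video.
        if frame_idx % frame_step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            for encoding in face_recognition.face_encodings(rgb):
                matches = face_recognition.compare_faces(known_encodings, encoding)
                seen.update(name for name, hit in zip(known_names, matches) if hit)
        frame_idx += 1
    cap.release()
    return seen
```

Matching against a gallery of named encodings is exactly the step that creates the risk: it ties a face in a pseudonymous clip to a real identity.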

Google Launches New Tool To Fight Toxic Trolls In Online Comments

“Don’t read the comments” is perhaps the most ancient and venerable of all internet-era axioms. Left untended, or even partially tended, internet comments have a way of racing straight to the bottom of the vile, toxic, nasty barrel of human hatred. But now Google says it’s basically training a robot to filter them for you, so human readers and moderators can catch a break. [More]
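
The tool in question is the Perspective API from Google’s Jigsaw unit. A minimal sketch of scoring a single comment, assuming you have obtained an API key and using the v1alpha1 endpoint the service launched with, might look like:

```python
# Sketch: score a comment's toxicity with the Perspective API.
# YOUR_API_KEY is a placeholder; endpoint and payload follow the
# v1alpha1 request shape, which may have changed since launch.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text):
    """Return a 0-to-1 probability that the comment is toxic."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

A comment system would then hide or hold anything scoring above whatever threshold its moderators choose.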

Evernote Backtracks On Privacy Policy Changes After User Outcry

Popular note-taking and general reminder app Evernote had big plans for 2017. In January, it was going to start feeding all your personal content to an algorithm in order to improve its internal machine learning. But those plans allowed human employees to peek over the robot’s shoulder at your stuff, and users objected loudly enough that the plans are now on hold. [More]

Evernote: Update To Privacy Policy Was “Communicated Poorly”

Evernote is a cross-platform application for taking notes and storing information, one that inspires almost religious devotion in its users. This week, though, some Evernote fans grew disillusioned over a change to the company’s privacy policy detailing how Evernote employees can access and read users’ notes. Update: This change has been called off, and Evernote will only peek at the notes of users who opt in. [More]

Facebook’s Robots Are Working Hard On Content Moderation So Humans Don’t Have To

Sometimes it’s a bad thing when a robot gets invented to do a human job. Other times, it can be a relief, because the job was really terrible for any human to do. That’s the tack Facebook is taking with content moderation now, getting its AI to identify and “quarantine” offensive content before any human has to look at it. [More]
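
Facebook hasn’t published how its system works, but the pattern it describes, score content first and quarantine the worst of it before any person sees it, is easy to sketch. Everything below, the thresholds and the stand-in classifier included, is assumed for illustration:

```python
# Sketch of a score-then-quarantine moderation pipeline. Facebook has
# not published its own; classify_offensiveness and both thresholds
# are hypothetical stand-ins.
from dataclasses import dataclass

QUARANTINE_THRESHOLD = 0.9  # assumed: auto-quarantine, no human view
REVIEW_THRESHOLD = 0.5      # assumed: borderline, route to a human

@dataclass
class Post:
    post_id: str
    text: str

def classify_offensiveness(post: Post) -> float:
    """Hypothetical stand-in for a trained model's 0-to-1 score."""
    flagged = {"slur", "threat"}
    words = post.text.lower().split()
    return min(1.0, sum(word in flagged for word in words) / 2)

def moderate(post: Post) -> str:
    score = classify_offensiveness(post)
    if score >= QUARANTINE_THRESHOLD:
        return "quarantined"   # removed before any moderator sees it
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # only these reach a person
    return "published"
```

The point of the design is the first branch: the most disturbing material never reaches a human queue at all.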