Facebook launches new system to spot ‘offensive’ memes and ‘fake news’

A man using his iPad at a desk. (LinkedIn Sales Navigator/Pexels/Released)
September 17, 2018

Facebook made a major announcement last week about a new system it is launching, called Rosetta.

Rosetta is a large-scale machine learning system that scans images on the social platform to automatically identify “inappropriate or harmful content.” Rosetta can process more than a billion images a day, Fox News reported.

Approximately 350 million photos are uploaded to Facebook each day. Facebook and Instagram have turned to artificial intelligence to help understand these images.

The Facebook AI will do much more than identify offensive memes, however. It is also capable of detecting words that appear in images of storefronts, street signs, or restaurant menus.

“Understanding the text that appears on images is important for improving experiences, such as a more relevant photo search or the incorporation of text into screen readers that make Facebook more accessible for the visually impaired. Reading text in images is also important in identifying inappropriate or harmful content and keeping our community safe,” Facebook said.

This announcement comes after reports that 26 percent of Americans have deleted the Facebook app because of fake news, Mashable reported.

Fake news reports have been a growing concern for Facebook since the 2016 U.S. presidential election.

Rosetta uses a two-step process: it first scans an image to detect the regions that contain text, then applies text recognition to determine what that text actually says. Rosetta will support multiple languages, including encodings in Arabic and Hindi, Facebook said.
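Rosetta itself is proprietary, but this detect-then-recognize pattern is standard in optical character recognition. The sketch below illustrates the same two-step idea using the open-source Tesseract engine via the pytesseract Python library; the file name and confidence threshold are assumptions for the demo, and none of this reflects Facebook’s actual implementation.

```python
# A minimal sketch of the detect-then-recognize pattern described above,
# using open-source Tesseract via pytesseract -- not Facebook's Rosetta.
from PIL import Image
import pytesseract

def extract_text_regions(image_path):
    """Return (bounding_box, word) pairs found in an image."""
    img = Image.open(image_path)
    # Step 1 (detection): Tesseract locates candidate word regions.
    # Step 2 (recognition): it transcribes the characters in each region.
    # image_to_data returns both results together, one row per detected word.
    data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
    results = []
    for i, word in enumerate(data["text"]):
        # Skip empty detections and low-confidence transcriptions.
        if word.strip() and float(data["conf"][i]) > 50:
            box = (data["left"][i], data["top"][i],
                   data["width"][i], data["height"][i])
            results.append((box, word))
    return results

if __name__ == "__main__":
    # "meme.jpg" is a placeholder input for the demo.
    for box, word in extract_text_regions("meme.jpg"):
        print(box, word)
```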

Facebook’s and Instagram’s teams are already using Rosetta to improve the effectiveness of photo search, the accuracy of images surfaced in News Feed, and the identification of hate speech.

Facebook has not had much luck in effectively identifying fake news or hate speech in the past.

In fact, Facebook’s own training guides misidentified an image of the 2010 earthquake in Jiegu, China, attributing it instead to genocide in Myanmar.

Recently, a vulnerability was discovered in Google’s Perspective API, a tool used to spot harmful comments. The tool can be bypassed if the scanned text includes typos, spaces inserted within words, or the addition of unrelated words.
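Perspective’s internals are not public, but the reported weakness is easy to see in principle: any filter that relies on surface-level text matching can be defeated by small perturbations. The toy Python filter below, a deliberately naive stand-in with a made-up blocklist rather than Google’s actual model, shows how a typo or an inserted space slips past exact matching.

```python
# A toy stand-in for a comment filter -- deliberately naive, and not
# Google's Perspective model -- showing why exact text matching is brittle.
BLOCKLIST = {"idiot"}  # hypothetical "harmful" vocabulary for the demo

def naive_filter(comment: str) -> bool:
    """Flag a comment if any whitespace-separated token is blocklisted."""
    return any(token.lower() in BLOCKLIST for token in comment.split())

print(naive_filter("you idiot"))   # True  -- exact match is caught
print(naive_filter("you id iot"))  # False -- an inserted space evades it
print(naive_filter("you idi0t"))   # False -- a one-character typo evades it
```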

Facebook said that in the future it could apply the same technology to understanding text that appears in videos as well.