Social media should be accountable for ‘deepfake’ content, intelligence experts say

Laptop in a dark room. (PxHere/Released)

Congress should amend portions of U.S. law that allow social media companies to enjoy immunity for content posted on their platforms in light of the significant dangers posed by artificial intelligence-enabled fake videos, a panel of experts told the House Intelligence Committee at a hearing Thursday.

Social media companies should be asked to exercise reasonable moderation of content, and U.S. government agencies should educate citizens on how to tell if a video is fake and invest in technologies that will aid in such determinations, the experts said.

The hearing, led by House Intelligence Committee Chairman Adam B. Schiff, comes as lawmakers and technologists fear that Russia, China and other foreign powers are likely to scale up their attacks on U.S. elections in 2020 with “deepfake” videos that could leave American voters unable to distinguish real videos from intentionally manipulated ones.

In 2016, a Kremlin-backed troll farm created fake social media accounts to mislead American voters, but “three years later, we are on the cusp of a technological revolution that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic,” Schiff said in his opening remarks at the hearing.

Artificial intelligence technologies now allow video and audio of a person to be manipulated so that the person appears to do and say things he or she never did or said. Such videos “enable malicious actors to foment chaos, division or crisis and they have the capacity to disrupt entire campaigns, including that for the presidency,” Schiff said.

Having unwittingly enabled fake accounts on their platforms in 2016, social media companies once again face scrutiny over how they handle misleading videos. Last month, Facebook drew intense criticism for a doctored video of Speaker Nancy Pelosi, altered using old-fashioned editing techniques, that shows her appearing to slur her words as if she were intoxicated. Facebook refused to take down the video, saying instead that it would tweak its algorithm to limit how widely the video spreads.

Section 230 of the Communications Decency Act, which exempts social media companies from being treated as publishers of material that appears on their platforms, may be giving the companies too much leeway, Schiff said. “Should we do away with that immunity?”

Congress should amend the law “to condition the immunity on reasonable moderation practices rather than the free pass that exists today,” Danielle Citron, a law professor at the University of Maryland, told the committee. The current exemption gives social media companies no incentive to take down “destructive, deepfake content,” she said.

Citron said deepfake videos can be used not only by foreign and domestic perpetrators against political opponents but also to hurt companies, for example by showing a chief executive saying something damaging just hours before a public offering, which could send the company’s stock price into collapse.

Facebook co-founder and CEO Mark Zuckerberg has also called for regulating online platforms, and in a March op-ed in The Washington Post, he wrote that such regulations should address harmful content, election integrity, privacy and allowing users to take their data with them.

Amending the law so broadly that all manipulated videos must be taken down could, however, sweep up satirical takes on politicians and those in power, said Clint Watts, a senior fellow at the German Marshall Fund.

Federal agencies should quickly refute fake videos with factual content, politicians of both parties and their campaign staff should work with social media companies to respond quickly to smears, and the administration should develop aggressive measures, including sanctions, to go after foreign troll farms that promote fake videos, Watts said.

Technology is advancing so fast that it is now possible for perpetrators to cover up evidence that they have manipulated a video, said David Doermann, a professor of computer science at the University at Buffalo. “A lot of trace evidence can be destroyed with simple manipulation on top of deepfake.”
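
To make the idea of “trace evidence” concrete, here is a minimal sketch of error-level analysis, one classic forensic check of the kind Doermann describes. It assumes the Pillow imaging library is installed, and the input file name is hypothetical. The technique recompresses an image once and looks at where the pixels change most, since spliced-in regions often respond to recompression differently than the untouched background; a simple uniform re-encode layered on top of a fake can flatten exactly that contrast.

import io
from PIL import Image, ImageChops  # Pillow imaging library

def ela_map(path: str, quality: int = 90) -> Image.Image:
    """Recompress an image once and return the per-pixel difference.

    Edited or pasted-in regions often show a different error level than
    the rest of the frame; a uniform re-encode of the whole frame tends
    to erase that contrast, which is the trace destruction Doermann
    describes.
    """
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # recompress once
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")
    return ImageChops.difference(original, recompressed)

if __name__ == "__main__":
    diff = ela_map("suspect_frame.jpg")  # hypothetical input file
    print("per-channel (min, max) error:", diff.getextrema())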

Still, the Pentagon’s Defense Advanced Research Projects Agency, or DARPA, where Doermann previously worked, has been developing technologies to tell if an image or a video has been altered, he said.

At the moment, such techniques to detect fakes can be applied to one case at a time, but the “problem is doing it at scale,” Doermann said.

If the techniques for spotting video manipulation can be automated, then platforms such as Facebook and Twitter could detect and stop deepfake videos before they’re published online, instead of trying to spot and stop them after the fact, Doermann said.
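
Schematically, that automation would hook detection into the upload path itself. The sketch below is purely illustrative and not any platform’s actual system: score_frame is a stub standing in for a trained forensic model, and the 0.8 threshold is an assumed policy cutoff, not a published value.

from typing import Iterable

HOLD_THRESHOLD = 0.8  # assumed policy cutoff, not a published value

def score_frame(frame: bytes) -> float:
    """Stub detector: a real pipeline would run a trained forensic model."""
    return 0.0  # placeholder; 1.0 would mean "certainly manipulated"

def screen_upload(frames: Iterable[bytes]) -> bool:
    """Return True if the video should be held for review before
    publication, rather than flagged after the fact."""
    return any(score_frame(frame) >= HOLD_THRESHOLD for frame in frames)

Even this toy version makes Doermann’s scale point visible: the expensive call is score_frame, and it would have to run on frames from every one of the billions of videos users upload.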

Once fake videos and rumors are published, it is hard to dislodge the bad information from people’s minds, said Jack Clark, the policy director for OpenAI, a research organization that advocates for safe artificial intelligence technologies.

“Fact checks tend not to travel as much as the initial message,” Clark said.

———

© 2019 CQ-Roll Call, Inc., All Rights Reserved

Distributed by Tribune Content Agency, LLC.