After 20,000 or more New Hampshire voters received a call featuring an artificial-intelligence-doctored voice of President Joe Biden asking them to skip the state’s January primary, state officials were in a quandary.
Attorney General John M. Formella, working with officials in other states, launched an investigation into the robocall, which urged recipients to “save your vote for the November election,” and ultimately identified a Texas-based company as the culprit. But New Hampshire lawmakers, who say simply identifying the origins of these deepfakes isn’t enough, are backing legislation that would prohibit them within 90 days of an election unless they’re accompanied by a disclosure stating that AI was used.
New Hampshire is now one of at least 39 states considering measures that would add transparency to AI-generated deepfake ads or calls as political campaigns intensify ahead of the November presidential election. The state’s measure has passed the House but not the Senate.
Like New Hampshire’s bill, other states’ efforts are largely focused on identifying content produced using AI as opposed to controlling that content or prohibiting its distribution, according to Megan Bellamy, vice president of law and policy at the Voting Rights Lab, a nonpartisan group that tracks election-related laws in the states.
“I think what we’re seeing is an effort [by states] to address a known, growing and evolving field of AI-generated content without overshooting it and crossing the line that would trigger First Amendment arguments or any other legal pushback,” Bellamy said in an interview.
In Wisconsin, Gov. Tony Evers, a Democrat, signed into law a measure requiring political ads and messages produced using synthetic audio or video, or made with AI tools, to carry a disclaimer. Failure to comply results in a $1,000 fine for each violation.
Fair-election groups like Voting Rights Lab say that doesn’t go far enough. The Wisconsin disclaimer requirement applies only to campaign-affiliated entities while leaving out other individuals and groups, Bellamy said.
She added that a $1,000 fine could lead a political action committee or a campaign to decide an AI-generated deepfake is worth the cost if it goes viral and puts its message in front of voters.
The Florida Legislature, meanwhile, passed legislation with a bit more teeth: failing to disclose the use of AI-enabled messages would be a misdemeanor punishable by up to a year in prison. The measure is awaiting the governor’s signature.
And Arizona is considering similar measures requiring disclaimers in the 90-day period before an election, with repeated failures potentially resulting in a felony charge.
Unlike the states, Congress seems explicitly interested in regulating the content of deepfakes. Several bills would prohibit their circulation, including a measure backed by Sens. Amy Klobuchar, D-Minn.; Josh Hawley, R-Mo.; Chris Coons, D-Del.; Susan Collins, R-Maine; Pete Ricketts, R-Neb.; and Michael Bennet, D-Colo., that would bar the distribution of AI-generated material targeting a candidate for federal office.
Another, backed by Sen. Richard Blumenthal, D-Conn., and Hawley, would strip AI-generated content of protections under Section 230 of a 1996 communications law, exposing online platforms to legal liability for posting such material and likely forcing its removal.
Deepfakes have targeted presidential, congressional, and even local elections, Blumenthal said at a recent hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, which he chairs.
“Anyone can do it, even in the tiniest race,” he said. “In some ways, local elections present an even bigger risk. That’s a recipe for toxic and destructive politics. Congress has the power, indeed the obligation, to stop this AI nightmare.”
Neither measure has received a committee vote.
A tech solution?
Technology could help identify deepfakes if AI companies were required to watermark or stamp AI-generated content, marking it as produced by software rather than audio and video recorded from real humans.
But requiring watermarks on campaign ads created with AI is no guarantee that all creators of such ads will comply, Ben Colman, CEO and founder of Reality Defender, a company that specializes in detecting deepfakes, told the Senate Judiciary subcommittee.
“The challenge of that is it presupposes that everybody’s going to follow those same rules,” Colman said. Bad actors “just aren’t going to follow the rules.”
That may be the case with the New Hampshire robocall, which officials say violated laws already on the books.
North Carolina Attorney General Joshua H. Stein’s office, acting as part of a multistate anti-robocall task force, sent a letter in February to Life Corp. of Arlington, Texas, accusing the company of responsibility for the calls and of spoofing the originating phone number on 20,000 robocalls placed two days before New Hampshire’s Democratic primary.
“The Task Force also has reason to believe there was an intention to cause harm to prospective voters by attempting to discourage them from exercising their constitutionally protected right to vote and to cause harm to the subscriber of the phone number that was spoofed,” it said.
The letter said the calls may violate the 1991 Telephone Consumer Protection Act and the 2009 Truth in Caller ID Act, as well as state consumer protection statutes.
Although states are in charge of conducting federal elections, the challenge of confronting AI-generated deepfake ads and messages can’t be handled by states alone, David M. Scanlan, New Hampshire’s secretary of state, said at a hearing of the Senate Judiciary privacy and technology panel.
“At some point, I believe that there is a federal component to this, because it’s going to be a national problem,” Scanlan said. “You know these things in a national election are going to be generated nationally, whether it’s foreign actors or some other malicious circumstances,” he said, referring to deepfake audio and video messages.
“And I think we need uniformity, and the power of the federal government to help put the brakes” on the use of AI-generated deepfake campaign ads, Scanlan said.
___
©2024 CQ-Roll Call, Inc.
Distributed by Tribune Content Agency, LLC.