September 21, 2024

California passes election ‘deepfake’ laws, forcing social media companies to take action

Jim Wilson / New York Times, file

Gov. Gavin Newsom of California speaks during a news conference in Sacramento, Calif., Jan. 10, 2024.

California will now require social media companies to moderate the spread of election-related impersonations powered by artificial intelligence, known as “deepfakes,” after Gov. Gavin Newsom, a Democrat, signed three new laws on the subject Tuesday.

The three measures, including a first-of-its-kind law that imposes a new requirement on social media platforms, largely deal with banning or labeling the deepfakes.

Only one law will take effect in time to affect the 2024 presidential election, but the trio could offer a road map for regulators across the country who are attempting to slow the spread of the manipulative content powered by AI.

The laws are expected to face legal challenges from social media companies or groups focusing on free speech rights.

Deepfakes use AI tools to create lifelike images, videos or audio clips resembling actual people. Though the technology has been used to create jokes and artwork, it has also been widely adopted to supercharge scams, create nonconsensual pornography and disseminate political misinformation.

Elon Musk, owner of the social platform X, posted a deepfake to his account this year that would have run afoul of the new laws, experts said. In one post, viewed millions of times, Musk shared fake audio of Vice President Kamala Harris, the Democratic nominee, calling herself the “ultimate diversity hire.”

California’s new laws add to efforts in dozens of states to limit the spread of AI fakes around elections and sexual content. Many states have required labels on deceptive audio or visual media, part of a surge in regulation that has received wide bipartisan support. Some have regulated election-related deepfakes, but most are focused on deepfake pornography. There is no federal law that bans or even regulates deepfakes, though several have been proposed.

California policymakers have taken an intense interest in regulating AI, including with a new piece of legislation that would require tech companies to test the safety of powerful AI tools before releasing them to the public. The governor has until Sept. 30 to sign or veto the other legislation.

Two of the laws signed Tuesday place limits on how election-related deepfakes — including those targeting candidates and officials or those questioning the outcome of an election — can circulate.

One takes effect immediately and effectively bans people or groups from knowingly sharing certain deceptive election-related deepfakes. It is enforceable for 120 days before an election, similar to laws in other states, but goes further by remaining enforceable for 60 days after — a sign that lawmakers are concerned about misinformation spreading as votes are being tabulated.

The other will go into effect in January, and requires labels to appear on deceptive audio, video or images in political advertisements when they are generated with help from AI tools.

The third law, known as the “Defending Democracy from Deepfake Deception Act,” will go into effect in January and require social media platforms and other websites with more than 1 million users in California to label or remove AI deepfakes within 72 hours after receiving a complaint. If the website does not take action, a court can require it to do so.

“It’s very different from other bills that have been put forth,” said Ilana Beller, an organizing manager for the democracy team at Public Citizen, which has tracked deepfake laws nationwide. “This is the only bill of its kind on a state level.”

All three apply only to deepfakes that could deceive voters, leaving the door open for satire or parody — so long as they are labeled — and would be effectively limited to the period surrounding an election. Though the laws apply only in California, they govern deepfakes depicting presidential and vice-presidential candidates along with scores of statewide candidates, elected officials and election administrators.

Newsom also signed two other laws Tuesday governing how Hollywood uses deepfake technology: one requiring explicit consent to use deepfakes of performers, and another requiring an estate’s permission to depict deceased performers in commercial media such as movies or audiobooks.

Lawmakers have generally not passed laws that govern how social media companies moderate content because of a federal law, known as Section 230, that protects the companies from liability over content posted by users. The First Amendment also offers wide protections to social media companies and users, limiting how governments can regulate what is said online.

“They’re really asking platforms to do things we don’t think are feasible,” said Hayley Tsukayama, the associate director of legislative activism at the Electronic Frontier Foundation, a digital rights group in San Francisco, which wrote letters opposing the new laws. “To say that they’re going to be able to identify what is really deceptive speech, and what is satire, or what is First Amendment protected speech is going to be really hard.”

The law’s supporters have argued that because it imposes no financial penalties on companies for failing to follow the law, Section 230 may not apply.

A number of free speech and digital rights groups, including the First Amendment Coalition, have strenuously opposed the laws.

“Some people may, of course, disseminate a falsehood — that’s a problem as old as politics, as old as democracy, as old as speech,” said David Loy, the legal director for the First Amendment Coalition. “The premise of the First Amendment is that it’s for the press and public and civil society to sort that out.”

This article originally appeared in The New York Times.