September 11, 2024

AI’s threat to election integrity in focus at Las Vegas Ai4 summit

Artificial Intelligence Conference at MGM

Steve Marcus

Former Congressman Jim Moran, second from right, founder of Moran Global Strategies, speaks on an election integrity panel during Ai4, an artificial intelligence conference, at the MGM Grand Convention Center Tuesday, Aug. 13, 2024. Other panelists, from left, include Jennifer Morrell, Nina Jankowicz, and Josh Nanberg.

Nina Jankowicz, the former head of the U.S. Department of Homeland Security’s short-lived disinformation governance board, said the Russian government was using artificial intelligence to improve its tactics in spreading disinformation in the 2024 presidential election.

Messaging previously had the “clear and clumsy fingerprints of the Russian Federation on it,” including grammatical issues and misunderstood idioms, Jankowicz said Tuesday at the MGM Grand during Ai4, the world’s largest gathering of artificial intelligence leaders.

Now, along with resolving those problems, the Russian government can precisely target the people — like American voters — most likely to engage with it.

“I’ve dealt with a lot of disinformation aimed at me,” Jankowicz said. “But I also study the way that it moves around, and AI has certainly been an area of concern of mine for a long time.”

Jankowicz said she resigned one month into her tenure with the disinformation board because of the backlash. The board dissolved after just four months of existence.

Jankowicz, who is now CEO of an organization looking to quell right-wing attacks on disinformation research, was joined at the Las Vegas conference by former elections official Jennifer Morrell; Brandie Nonnecke, director of the CITRIS Policy Lab at the University of California, Berkeley; and former Democratic U.S. Rep. Jim Moran of Virginia to discuss AI’s effect on election integrity.

The panel was run by Ai4, a yearly AI conference, as part of its first policy summit. Along with a keynote presentation from U.S. Rep. Jay Obernolte, R-Calif., the co-chair of the House’s Task Force on AI, the day included panels on artificial general intelligence and regulating AI.

Speakers were largely concerned with how AI could affect elections — something already seen in Nevada.

Former North Las Vegas Mayor John Lee, who was then vying for the Republican nomination in the state’s 4th Congressional District against David Flippo, sued his opponent in June over a website allegedly hosting a deepfake of him.

The website hosted audio of a man talking to a woman about having sex with her and her 13-year-old daughter, the Nevada Independent reported. Flippo denied any involvement.

“If you ask the average American what the worst thing that could happen with AI is, you will get an answer out of ‘The Terminator’ movie,” Obernolte said during his keynote. “We know in this room that is not a realistic risk, but we also know that there are equally consequential risks that do exist.” He instead laid out a series of threats: the spread of misinformation, cybertheft, “nonconsensual intimate imagery” and autonomous weapons.

With the spread of AI, Obernolte said he believes the country is now at a crossroads. The United States could either follow the European Union — which passed a comprehensive, 144-page act regarding AI, creating a “parallel bureaucracy” to deal with the technology, according to Obernolte — or make its own way, regulating incrementally and by sector.

While he prefers the latter, Obernolte’s task force is aiming to release a proposal on how the federal government should tackle AI regulation by the end of the year. Even before the task force was created, Obernolte said he told House Speaker Mike Johnson that he wanted the group to be “broadly bipartisan.”

“What’s going to lead to the successful adoption of AI is if we can craft a federal regulatory framework that doesn’t change every time the political winds shift one way or the other,” Obernolte said.

But for those on the election integrity panel, the threats they described could materialize before the task force finishes its work.

Morrell, who worked as an election official for nearly a decade, now runs The Elections Group, an organization that consults election officials on how to improve the voting process and provides direct support for jurisdictions. She emphasized that AI could make disinformation that election officials were already facing even more intense.

“The democratization of disinformation that AI brings — meaning the ability to produce, very quickly, images and videos and audio — has a lot of folks worried that, even without AI, we’re struggling to stay ahead,” Morrell said.

Moran, who served in Congress for more than 20 years, discussed the effect of social media algorithms, which he believes silo people further into their own beliefs. He said this online polarization is creating two distinct, separate political camps in the country, and that he watched it progressively worsen over his time in elected office.

Nonnecke, the founding director of the CITRIS Policy Lab, said she wanted people to remember that AI was a tool that could be both used to increase participation in democracy and weaponized by bad actors.

“I believe that if we get our job of regulating right, AI will be the next big impetus that leads to not just an increase in worker productivity, but an increase in the size of our economy and a rising wave of prosperity that literally lifts all of those in America,” Obernolte said. “So that’s why we need to do our job of regulating it appropriately.”

[email protected] / 702-990-8923 / @Kyle_Chouinard