Las Vegas Sun

May 12, 2024

Musk pursues artificial intelligence even as he warns of possible dangers

Chris Burnett / New York Times

Even as Elon Musk publicly calls out the potential harms of artificial intelligence, he plans to compete with OpenAI, the ChatGPT developer he helped found. It is the latest development in the billionaire’s long and complicated history with AI, governed by his contradictory views on whether the technology will ultimately benefit or destroy humanity.

In December, Elon Musk became angry about the development of artificial intelligence and put his foot down.

He had learned of a relationship between OpenAI, the startup behind the popular chatbot ChatGPT, and Twitter, which he had bought in October for $44 billion. OpenAI was licensing Twitter’s data — a feed of every tweet — for about $2 million a year to help build ChatGPT, two people with knowledge of the matter said. Musk believed the AI startup wasn’t paying Twitter enough, they said.

So Musk cut OpenAI off from Twitter’s data, they said.

Since then, Musk has ramped up his own AI activities, even as he warns publicly about the technology’s hazards. He is in talks with Jimmy Ba, a researcher and professor at the University of Toronto, to build a new AI company called X.AI, three people with knowledge of the matter said. He has hired top AI researchers from Google’s DeepMind at Twitter. And he has spoken publicly about creating a rival to ChatGPT that generates politically charged material without restrictions.

The actions are part of Musk’s long and complicated history with AI, governed by his contradictory views on whether the technology will ultimately benefit or destroy humanity. Even as he recently jump-started his AI projects, he also signed an open letter last month calling for a six-month pause on the technology’s development because of its “profound risks to society.”

And although Musk is pushing back against OpenAI and plans to compete with it, he helped found the AI lab in 2015 as a nonprofit. He has since said he has grown disillusioned with OpenAI because it no longer operates as a nonprofit and is building technology that, in his view, takes sides in political and social debates.

What Musk’s AI approach boils down to is doing it himself. The 51-year-old billionaire, who also runs the electric carmaker Tesla and the rocket company SpaceX, has long seen his own AI efforts as better, safer alternatives to those of his competitors, according to people who have discussed these matters with him.

“He believes that AI is going to be a major turning point and that if it is poorly managed, it is going to be disastrous,” said Anthony Aguirre, a theoretical cosmologist at the University of California, Santa Cruz, and a founder of the Future of Life Institute, the organization behind the open letter. “Like many others, he wonders: What are we going to do about that?”

Musk and Ba, who is known for creating a popular algorithm used to train AI systems, did not respond to requests for comment. Their discussions are continuing, the three people familiar with the matter said.

A spokesperson for OpenAI, Hannah Wong, said that although it now generated profits for investors, it was still governed by a nonprofit and its profits were capped.

Musk’s roots in AI date to 2011. At the time, he was an early investor in DeepMind, a London startup that set out in 2010 to build artificial general intelligence, or AGI, a machine that can do anything the human brain can. Less than four years later, Google acquired the 50-person company for $650 million.

At a 2014 aerospace event at the Massachusetts Institute of Technology, Musk indicated that he was hesitant to build AI himself.

“I think we need to be very careful about artificial intelligence,” he said while answering audience questions. “With artificial intelligence, we are summoning the demon.”

That winter, the Future of Life Institute, which explores existential risks to humanity, organized a private conference in Puerto Rico focused on the future of AI. Musk gave a speech there, arguing that AI could cross into dangerous territory without anyone realizing it and announced that he would help fund the institute. He gave $10 million.

In the summer of 2015, Musk met privately with several AI researchers and entrepreneurs during a dinner at the Rosewood, a hotel in Menlo Park, California, famous for Silicon Valley deal-making. By the end of that year, he and several others who attended the dinner — including Sam Altman, then president of the startup incubator Y Combinator, and Ilya Sutskever, a top AI researcher — had founded OpenAI.

OpenAI was set up as a nonprofit, with Musk and others pledging $1 billion in donations. The lab vowed to “open source” all its research, meaning it would share its underlying software code with the world. Musk and Altman argued that the threat of harmful AI would be mitigated if everyone, rather than just tech giants like Google and Facebook, had access to the technology.

But as OpenAI began building the technology that would result in ChatGPT, many at the lab realized that openly sharing its software could be dangerous. Using AI, individuals and organizations can potentially generate and distribute false information more quickly and efficiently than they otherwise could. Many OpenAI employees said the lab should keep some of its ideas and code from the public.

In 2018, Musk resigned from OpenAI’s board, partly because of his growing conflict of interest with the organization, two people familiar with the matter said. By then, he was building his own AI project at Tesla — Autopilot, the driver-assistance technology that automatically steers, accelerates and brakes cars on highways. To do so, he poached a key employee from OpenAI.

In a recent interview, Altman declined to discuss Musk specifically, but said Musk’s breakup with OpenAI was one of many splits at the company over the years.

“There is disagreement, mistrust, egos,” Altman said. “The closer people are to being pointed in the same direction, the more contentious the disagreements are. You see this in sects and religious orders. There are bitter fights between the closest people.”

After ChatGPT debuted in November, Musk grew increasingly critical of OpenAI. “We don’t want this to be sort of a profit-maximizing demon from hell, you know,” he said during an interview last week with Tucker Carlson, the former Fox News host.

Musk renewed his complaints that AI was dangerous and accelerated his own efforts to build it. At a Tesla investor event last month, he called for regulators to protect society from AI, even though his car company has used AI systems to push the boundaries of self-driving technologies that have been involved in fatal crashes.

That same day, Musk suggested in a tweet that Twitter would use its own data to train technology along the lines of ChatGPT. Twitter has hired two researchers from DeepMind, two people familiar with the hiring said. The Information and Insider earlier reported details of the hires and Twitter’s AI efforts.

During the interview last week with Carlson, Musk said OpenAI was no longer serving as a check on the power of tech giants. He wanted to build TruthGPT, he said, “a maximum-truth-seeking AI that tries to understand the nature of the universe.”

Last month, Musk registered X.AI. The startup is incorporated in Nevada, according to the registration documents, which also list the company’s officers as Musk and his financial manager, Jared Birchall. The documents were earlier reported by The Wall Street Journal.

Experts who have discussed AI with Musk believe he is sincere in his worries about the technology’s dangers, even as he builds it himself. Others said his stance was influenced by other motivations, most notably his efforts to promote and profit from his companies.

“He says the robots are going to kill us?” said Ryan Calo, a professor at the University of Washington School of Law, who has attended AI events alongside Musk. “A car that his company made has already killed somebody.”

This article originally appeared in The New York Times.