Elon Musk, speaking outside the “AI Insight Forum” in Washington on Wednesday. He and other tech leaders were discussing artificial intelligence and possible government regulation.
Photo by Nathan Howard/Getty Images
Some of the world’s biggest tech leaders gathered in Washington, DC for a closed-door forum on AI.
Elon Musk, Mark Zuckerberg, Bill Gates, and other tech leaders all were scheduled to attend.
Musk described the forum as “a very civilized discussion” among some of the world’s smartest people.
Senate Majority Leader Chuck Schumer has been talking for months about accomplishing a potentially impossible task: passing bipartisan legislation within the next year that encourages the rapid development of artificial intelligence and mitigates its biggest risks.
On Wednesday, he convened a meeting of some of the country’s most prominent technology executives, among others, to ask them how Congress should do it.
The closed-door forum on Capitol Hill included almost two dozen tech executives, tech advocates, civil rights groups and labor leaders. The guest list featured some of the industry’s biggest names: Meta’s Mark Zuckerberg, Tesla and X’s Elon Musk, OpenAI’s Sam Altman, and former Microsoft CEO Bill Gates, who published a 3,000-word blog post on the implications of AI in July. All 100 senators were invited; the public was not.
“It was a very civilized discussion among some of the smartest people in the world,” Musk said after leaving the meeting. He said there was clearly some strong consensus, noting that nearly everyone raised their hands when Schumer asked if they believed some regulation is needed.
Schumer, who was leading the forum with Sen. Mike Rounds, R-S.D., won’t necessarily take the tech executives’ advice as he works with colleagues to try to ensure some oversight of the burgeoning sector. But he is hoping they will give senators some realistic direction for meaningful regulation of the tech industry.
Tech leaders outlined their views, with each participant getting three minutes to speak on a topic of their choosing.
Musk and former Google CEO Eric Schmidt raised existential risks posed by AI, Zuckerberg brought up the question of closed vs. “open source” AI models, and IBM CEO Arvind Krishna expressed opposition to the licensing approach favored by other companies, according to a person in attendance.
There appeared to be broad support for some kind of independent assessments of AI systems, according to this person, who spoke on condition of anonymity due to the rules of the closed-door forum.
Still, some senators were critical of the private meeting, arguing that tech executives should testify in public.
Sen. Josh Hawley, R-Mo., said he would not attend what he said was a “giant cocktail party for big tech.” Hawley has introduced legislation with Sen. Richard Blumenthal, D-Conn., to require tech companies to seek licenses for high-risk AI systems.
“I don’t know why we would invite all the biggest monopolists in the world to come and give Congress tips on how to help them make more money and then close it to the public,” Hawley said.
Congress has a limited track record when it comes to regulating technology, and AI specifically has grown mostly unchecked by government regulations so far.
In the US, major tech companies have expressed support for AI regulations, though they don’t necessarily agree on what that means. Microsoft has endorsed the licensing approach, for instance, while IBM prefers rules that govern the deployment of specific risky uses of AI rather than the technology itself.
Similarly, many members of Congress agree that legislation is needed, but there is little consensus on what it should look like: some worry more about overregulation, while others are more concerned about AI’s potential risks.
For many lawmakers and the people they represent, the most immediate worries are AI’s effects on employment and the flood of AI-generated misinformation.