Debunking Myths: Risks of Open-Source AI vs. Closed Systems

Published on June 11, 2025

Risks of Open-Source AI Also Exist in Closed Systems: Panelists

Public policy and industry experts are calling for forward-looking AI regulation. Many of the risks associated with open-source AI also exist in closed systems, panelists at a Georgetown University event said Monday, challenging the common perception that openness inherently makes AI more dangerous. The discussion, hosted by Georgetown’s Center for Business and Public Policy, brought together academics, policy experts, and industry voices to explore how the design and accessibility of AI systems, whether open-source or proprietary, could shape innovation, governance, and global competitiveness.

Open-Source AI Models vs. Closed Models

Open-source AI models are publicly available and can be inspected, modified, or reused by anyone. Closed models, by contrast, are proprietary systems controlled by companies such as OpenAI, Google, or Anthropic. "In some ways, the closed AI is a bigger risk simply because of the massive distribution," said Mike Linksvayer, vice president of developer policy at GitHub. "There’s a whole range of actors – from the scariest state-sponsored actors – who have access to their own capabilities, so open models don’t add anything marginal to that. A closed system with a nice user interface that’s cheap is the biggest risk there," he emphasized.

Optimism and Challenges in Open-Source AI

Throughout the event, panelists expressed optimism about the potential of open-source AI, while noting that many of the risks associated with open models are present in closed ones as well. "Frankly, the cat’s already out of the bag; it’s too late," Nagle said, emphasizing that open-source AI is already widespread and can’t be rolled back. "The real question is, do we want to be at the cutting edge?" Nagle remarked. "Today in AI and data analytics, open source is the leading edge and has been for a decade. Therefore, outlawing open-source AI in the US would be a massive competitive disadvantage and would give our competitors and our allies a big step up in the race to be at that cutting edge."

Policy Recommendations

The panelists urged policymakers to be ready to respond quickly to new developments in the industry. "This is a really fast-moving field, so the promptness of addressing challenges associated with AI use, open or closed, is important," said Jessie Wang, economist and professor of policy analysis at the RAND School of Public Policy. "At the same time, the openness of the model means once this material is out there, it’s really difficult to call back, so it’s more important that we think ahead of time about our response."

"A new model comes out every day, if not every hour," said Karl Zhao, generative AI consultant at DeepSeek. "That really makes tracking it more difficult, so I think it’s key to have a certain set of evaluation tools where you can really filter out the good and the bad."

Importance of Governance

"The most important thing policymakers can do in policy to prepare for AI including its benefits and risks is just improve governance generally," Linksvayer said. "Getting the boring stuff right to create inclusive good governance, that is actually across all societies what’s going to produce good outcomes from broad diffusion of AI capabilities whether they’re open or closed." "I don’t think openness per se is the problem," Wang said. "I think it’s the risk, uncertainty, and unintended consequences that come with it." "Careful policy design can help us get to that sweet spot where we have a good balance between the two," Wang said.
