10 Ways AI Poses a National Security Risk

Published On Wed Sep 11 2024

Presidential Memo To Outline AI As National Security Risk

A classified memo has been prepared outlining the risks posed by AI to national security and proposing measures that the federal government should take to "preserve and expand U.S. advantages" in the realms of science, business, and warfare. This memo is set to be delivered to President Joe Biden.

Government Measures

The memorandum emphasizes the need to prohibit certain government uses of AI, such as using AI to operate nuclear weapons or to monitor free speech. The document, first reported by Nextgov/FCW, aims to help the executive branch develop a unified approach to managing the security risks associated with AI. The initiative stems from an AI executive order President Biden signed almost a year ago.

Strategic Focus

The strategy outlined in the plan focuses primarily on collaboration between defense and intelligence agencies. The effort is being led by the newly established AI Safety Institute within the Commerce Department, along with the National Institute of Standards and Technology. According to a report by The Washington Post, government entities are expected to form partnerships with prominent AI industry leaders such as OpenAI, Anthropic, Google DeepMind, Elon Musk's xAI, and Meta AI. The overarching goal is for White House AI policy to align with global standards, including the European Union's AI Act.


Addressing AI Challenges

Furthermore, the memo is anticipated to discuss the energy requirements of AI computing in light of the government's emphasis on transitioning to sustainable energy sources. One of the main concerns prompting the memo's creation is the potential threat posed by "artificial general intelligence" (AGI), a theoretical form of AI that could exceed human intelligence levels.

Call for Legislative Action

Security experts credit the memo for recognizing both the risks and benefits of AI. However, many believe it lacks the necessary regulatory framework and would benefit significantly from robust national legislation. Cliff Jurkiewicz, vice president of global strategy at Phenom, criticizes the memo for offering guidance rather than establishing governance. He worries that major tech companies will have undue influence over policy-making and emphasizes the need for more informed decision-making by legislators.


State and Federal Regulation

In the absence of comprehensive federal AI laws, individual states are taking the initiative to regulate AI technologies. California lawmakers, for example, have passed a safety bill that obligates AI firms working with large language models to mitigate the risk of potential disasters. The bill, which has faced opposition from Silicon Valley tech giants, awaits approval from Governor Gavin Newsom.

However, the lack of a unified national approach to AI regulation poses challenges for smaller companies competing against tech giants with vast resources, as noted by Jurkiewicz.
