Can the Pentagon Use ChatGPT? OpenAI Won't Answer
The Pentagon and U.S. intelligence agencies are interested in using advanced chatbots, like ChatGPT, in their operations. However, OpenAI, the creator of ChatGPT, won't say whether it will allow its technology to be used for military purposes or high-risk government decisions.
OpenAI publishes a list of ethical lines it won't cross, including military and high-risk government applications. But some experts worry that such self-regulation is weak, allowing companies like OpenAI to appear principled to an AI-nervous public while still developing powerful technologies with few constraints.
The National Geospatial-Intelligence Agency, the nation's premier handler of geospatial intelligence, is interested in using the predictive-text capabilities of chatbots like ChatGPT to aid human analysts in interpreting the world. Yet OpenAI's own policies state that both "military and warfare" and "high-risk government decision-making" applications are forbidden.
It remains unclear whether OpenAI would take the money if the military came calling. Some experts warn that the company's unwillingness to engage the question is itself concerning, especially since even the tech sector's clearest-stated ethics principles have routinely proven to be an exercise in public relations and little else.
The Pentagon's interest in machine learning is only growing, and chatbots like ChatGPT are an obvious target of that interest. Whether OpenAI will permit the militarization of its technology remains an open question.