OpenAI agreement with DoD for classified network
United States
Artificial Intelligence
Government Policy
Ethics in Technology
Updated By: History Editorial Network (HEN)
OpenAI's agreement with the Department of Defense (DoD) marked a pivotal moment at the intersection of artificial intelligence and national security. After OpenAI's competitor Anthropic refused to allow the DoD to use its AI systems for purposes such as mass surveillance or autonomous weapons, the DoD characterized that refusal as a 'supply-chain risk.' OpenAI then announced its own agreement with the DoD to deploy its AI models on the government's classified network.

OpenAI CEO Sam Altman framed the agreement as a commitment to safety, emphasizing that it prohibited domestic mass surveillance and mandated human oversight over the use of force, particularly with respect to autonomous weapon systems. The agreement, however, contained no legally binding mechanisms to enforce these prohibitions, raising concerns among critics about the potential misuse of AI technologies in surveillance and military applications.
#OpenAI
#DoD
#AIAgreement
#NationalSecurity
#AutonomousWeapons
Primary Reference
Our agreement with the Department of War
