What was signed
The US Department of Defense (DoD) approved AWS, Google, Microsoft, NVIDIA, OpenAI, SpaceX, Reflection, and Oracle to deploy their models on Impact Level 6 (Secret data) and Impact Level 7 (highest classification) networks.
The scope is broad: intelligence analysis, operational planning, operator assistance in command-and-control systems, and processing of large volumes of classified data. The most visible contract: $500M to Scale AI (backed by Meta) to "sift through data and assist in decision-making".
Why Anthropic pulled out
Anthropic had a major falling out with the Pentagon after refusing to grant unrestricted access to Claude for "all lawful uses". The company maintains an Acceptable Use Policy that explicitly prohibits certain military use cases: weapons development, offensive cyberattacks, and mass surveillance without a legal framework.
The Pentagon requested waivers for those restrictions; Anthropic refused. The result: no contract. It is, as far as we know, the first time a frontier AI company has explicitly refused to enter the defense market for alignment reasons.
The DoD's AI-use directive of January 12, 2026 "aggressively sought to raise the bar for Military AI Dominance". That goal required providers willing to ease their restrictions. Anthropic was not willing.
The multi-vendor doctrine
A DoD official stated: "we will never again depend on a single AI provider". The statement is a direct response to lessons from the Microsoft-Azure era: when a single vendor concentrates government infrastructure, costs rise and innovation slows.
The new approach distributes the load across eight providers, allows internal competition, and keeps the Chief Digital and AI Office (CDAO) as technical arbiter. The CDAO now reports directly to the Secretary of Defense.
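The routing idea behind the multi-vendor doctrine can be sketched in a few lines. This is a minimal, entirely hypothetical illustration: the `ProviderPool` class, the provider names, and the call interface are invented for this example and reflect no actual DoD or vendor API.

```python
import itertools

class ProviderPool:
    """Rotate requests across multiple model providers, with failover."""

    def __init__(self, providers):
        # providers: dict mapping provider name -> callable(prompt) -> str
        self.providers = providers
        self._cycle = itertools.cycle(providers)

    def query(self, prompt):
        # Try each provider at most once, starting from the next in
        # rotation, so no single vendor becomes a hard dependency.
        for _ in range(len(self.providers)):
            name = next(self._cycle)
            try:
                return name, self.providers[name](prompt)
            except RuntimeError:
                continue  # provider unavailable: fail over to the next
        raise RuntimeError("all providers failed")

# Two stand-in "providers"; real ones would wrap vendor SDK calls.
pool = ProviderPool({
    "vendor_a": lambda p: f"a:{p}",
    "vendor_b": lambda p: f"b:{p}",
})
```

Consecutive calls to `pool.query` alternate between vendors, and if one raises, the pool moves on to the next. That interchangeability, not any single provider's capability, is the operational point of the doctrine.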
Specific applications
Three work streams are already in motion: aided target recognition (AI-assisted identification of targets, especially for counter-drone defense), intelligence analysis (mass processing of intercepted communications and satellite imagery), and operational planning (scenario simulation, logistics optimization).
The C-UAS Close-In Kinetic Defeat Enhancement project is the first with specific public funding for aided target recognition in drone takedown.
Historic budget
The $1.5 trillion defense budget for FY 2027 is the largest in modern history. The president's priorities: the Golden Dome shield, drones, AI, data infrastructure, and the defense industrial base.
The ethical debate
The debate divides the industry. OpenAI lifted its military-use ban in January 2024 and participates actively. Google rescinded its 2018 AI principles (which prohibited weapons) and returned to the defense market. Anthropic holds the line.
For companies building on these models (including VuraOS), the choice of provider implies an implicit ethical stance. It is not trivial: the same model that serves an e-commerce chatbot may also serve military target analysis.
Conclusion
The May 1 agreement changed the rules: military AI is now an open, competitive market. Anthropic's stance, costly in revenue terms, is the first demonstration that alignment can hold as a commercial differentiator. If it holds over the medium term, it could push other frontier labs to take a stand.