The U.S. Treasury Department has announced it will terminate all use of artificial intelligence tools produced by Anthropic, including the company’s Claude language model, as part of a broader directive to phase the technology out across the federal government. The move follows a confrontation between the administration and the San Francisco–based AI firm over restrictions Anthropic placed on how its technology could be used by the military and intelligence community. Officials indicated that the government will replace Anthropic systems with alternatives such as OpenAI’s ChatGPT and Google’s Gemini across agencies including Treasury, State, and Health and Human Services. The dispute reportedly stems from Anthropic’s refusal to permit certain national-security uses of its models—particularly those involving autonomous weapons systems and surveillance applications—prompting federal officials to classify the company as a potential supply-chain risk and begin canceling contracts. The decision marks one of the most dramatic shifts yet in Washington’s rapidly evolving approach to artificial intelligence procurement, signaling that the federal government expects technology vendors working on sensitive national-security projects to align closely with government policy and operational needs.
Sources
https://www.theepochtimes.com/us/treasury-to-drop-anthropic-as-us-begins-government-wide-phaseout-5992980
https://www.reuters.com/business/us-treasury-ending-all-use-anthropic-products-says-bessent-2026-03-02/
https://www.nextgov.com/acquisition/2026/03/agencies-begin-shed-anthropic-contracts-following-trumps-directive/411823/
https://www.seekingalpha.com/news/4559977-state-department-switches-to-openai-amid-anthropic-phaseout-report
Key Takeaways
- Federal agencies including Treasury, State, and Health and Human Services are phasing out Anthropic AI tools after a directive to end reliance on the company’s technology across government systems.
- The dispute centers on Anthropic’s refusal to allow unrestricted military and surveillance applications of its AI models, triggering national-security concerns within the federal government.
- The phaseout is accelerating a shift toward competing AI platforms such as OpenAI’s ChatGPT and Google’s Gemini for federal agency operations and defense-related systems.
In-Depth
The federal government’s decision to begin phasing out Anthropic technology marks a defining moment in Washington’s increasingly strategic approach to artificial intelligence. For years, policymakers warned that advanced AI systems would soon become as critical to national security as traditional defense technologies. That reality is now arriving, and the government’s reaction to Anthropic’s restrictions reveals how seriously officials are taking the matter.
At the center of the dispute is the company’s Claude AI model, a powerful large language model that had already been deployed in various government workflows. Agencies used the system for tasks ranging from document analysis to internal automation. But the relationship between Anthropic and federal agencies deteriorated after disagreements over the permissible uses of the technology, particularly within military and intelligence contexts.
Anthropic reportedly maintained firm guardrails on how its AI could be deployed, declining to authorize certain uses related to autonomous weapons systems or broad surveillance capabilities. Those restrictions clashed with the expectations of defense and national-security officials, who argued that the government—not private companies—should determine how tools used in national defense are applied.
That disagreement triggered a decisive response. Federal officials began classifying Anthropic as a potential supply-chain risk, a designation typically reserved for foreign companies or entities suspected of posing security vulnerabilities. Once that label entered the conversation, the path forward became clear: federal agencies would begin removing Anthropic technology from their systems.
Treasury’s decision to terminate all use of the company’s products illustrates how quickly the policy shift is unfolding. Other agencies followed suit almost immediately, including the State Department and the Department of Health and Human Services. In parallel, the General Services Administration began removing Anthropic services from procurement platforms used by federal agencies.
The replacement strategy is already underway. Agencies are pivoting toward alternative AI platforms, most notably those produced by OpenAI and Google. These systems are expected to take over many of the operational roles previously filled by Anthropic’s models. At the same time, defense officials are negotiating new agreements with competing vendors to supply AI capabilities for classified networks and national-security operations.
From a broader perspective, the episode highlights a deeper tension emerging in the AI era. Private technology firms increasingly build systems with global ethical frameworks and safety guidelines. Governments, however, operate under their own strategic imperatives, particularly when national defense is involved. When those priorities clash, the government has historically prevailed—especially when public funding and federal contracts are on the line.
For the AI industry, the message is unmistakable. Companies seeking government contracts must be prepared to navigate not only technical requirements but also the political and strategic expectations that accompany national-security partnerships. The federal government’s swift pivot away from Anthropic demonstrates that Washington is willing to move rapidly when it believes those expectations are not being met.
The outcome of this confrontation could reshape how artificial intelligence firms engage with federal agencies going forward. Vendors may now face greater pressure to align their internal policies with government priorities if they hope to compete in the lucrative federal AI market. In the emerging technological rivalry between global powers, policymakers appear determined to ensure that the tools used to defend the country remain fully under American control.