The U.S. military has reportedly relied on Claude, the artificial-intelligence system developed by Anthropic, to assist in recent operations targeting Iran, even as the U.S. government escalates a bitter dispute with the company over how the technology can be used in warfare. Reports indicate the AI has been integrated into intelligence and targeting workflows, helping analyze surveillance data, simulate battlefield scenarios, and prioritize targets during strikes conducted alongside allied forces. The controversy stems from Anthropic's insistence on maintaining strict guardrails that prohibit the use of its systems for domestic surveillance or fully autonomous lethal weapons, restrictions the Pentagon has pushed to loosen. After the company refused to remove those safeguards, federal officials designated it a potential supply-chain risk and ordered agencies to phase out its technology, though the Pentagon continues to rely on the AI because it is deeply embedded in operational systems. The clash underscores the rapidly accelerating role of artificial intelligence in modern warfare and raises the question of whether Silicon Valley companies or national-security officials should ultimately set the boundaries for how these powerful tools are used on the battlefield.
Sources
https://www.semafor.com/article/03/04/2026/us-military-is-using-claude-in-iran-amid-anthropic-feud
https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-s/
https://www.thenationalnews.com/future/technology/2026/03/04/anthropic-iran-strikes-ai-trump/
Key Takeaways
- Artificial intelligence is now directly integrated into military targeting workflows, enabling faster intelligence analysis and strike planning during combat operations.
- A major dispute has erupted between the U.S. government and AI developer Anthropic over safeguards that restrict the technology’s use for surveillance or autonomous weapons.
- Despite being ordered phased out of federal use, the AI system remains embedded in military infrastructure, highlighting the growing dependency of modern defense operations on advanced AI tools.
In-Depth
The unfolding confrontation between the U.S. government and the artificial-intelligence company behind Claude illustrates a fundamental shift in how modern warfare is conducted. Military planners have increasingly turned to advanced machine-learning systems to process enormous volumes of surveillance data gathered from satellites, drones, signals intercepts, and other intelligence sources. In the campaign involving Iran, reports indicate that Claude has been used to assist analysts with tasks such as drafting intelligence assessments, identifying potential targets, and simulating battlefield scenarios before strikes are executed. The goal is simple: compress the military "kill chain" so decisions that once took hours or days can occur in minutes.
The speed advantage offered by AI is obvious to defense planners. Tools like Claude can rapidly sift through data streams that would overwhelm human analysts. By highlighting potential threats or prioritizing targets, these systems help commanders allocate aircraft, missiles, and other resources more efficiently. In practical terms, that means a military operation can strike more targets faster, while theoretically keeping human decision-makers in the loop before weapons are deployed.
Yet the same technological leap is also sparking intense political and ethical debate. Anthropic designed its AI with strict safeguards intended to prevent certain uses, including mass domestic surveillance or the creation of fully autonomous weapons systems capable of firing without human approval. Those guardrails collided directly with the Pentagon’s desire for maximum flexibility in deploying AI for national-security missions. When the company refused to remove those restrictions, tensions escalated sharply.
Federal officials responded by designating Anthropic as a potential supply-chain risk and ordering agencies to phase out its systems over time. However, the reality of modern military infrastructure complicates that directive. Claude and similar tools are already integrated into existing intelligence programs and operational workflows. Removing them overnight could disrupt systems that analysts and commanders now rely upon.
The episode reveals a broader trend that will likely shape national security for decades: artificial intelligence is rapidly becoming as central to warfare as aircraft carriers, satellites, or cyber capabilities. While Silicon Valley firms may design the technology, once it becomes embedded in military systems, governments are unlikely to surrender the advantages it provides. The current dispute therefore represents more than a corporate feud—it is an early battle in a larger struggle over who controls the future of AI-driven warfare and how far the United States will go to maintain technological superiority over its adversaries.