Washington: Tech billionaire Elon Musk has stepped into an escalating dispute between artificial intelligence firm Anthropic and the US Department of War, publicly backing the Pentagon’s criticism of the company over restrictions on military use of its AI model, Claude.
The controversy erupted after Anthropic declined to remove certain safeguards governing how its AI systems may be deployed by the military. The disagreement has intensified ahead of a reported Friday 5:01 PM ET deadline set by the Department of War for the company to reconsider its position.
Musk amplifies Pentagon criticism
Musk reposted a message by Under Secretary of War Emil Michael on X, the social media platform he owns, and wrote: “Anthropic hates Western Civilization.”
Michael had alleged that Anthropic attempted to remove an earlier version of Claude’s “constitution” — a document outlining the ethical principles guiding the AI’s responses — from the internet. He cited a line from the previous version stating: “Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort.”
According to Michael, this wording suggested that the AI model was trained to prioritise non-Western sensitivities over Western ones. The implication, he argued, raised concerns about potential bias in high-stakes military applications.
This is not the first time Musk has targeted Anthropic. In recent weeks, he has described the company’s AI as “misanthropic and evil” and criticised its ethical framework, publicly attacking Anthropic-affiliated philosopher Amanda Askell in the process.
Pentagon accuses Anthropic of misrepresentation
The dispute escalated further after Anthropic CEO Dario Amodei published a blog post outlining what he described as two non-negotiable “red lines”: the company would not permit its AI systems to be used for mass domestic surveillance or for fully autonomous weapons without meaningful human oversight.
Amodei wrote that he “cannot in good conscience” lift these safeguards, even if it results in the loss of government contracts.
Michael responded sharply on X, stating: “Anthropic is lying. The @DeptofWar doesn’t do mass surveillance as that is already illegal.” He added that the Pentagon’s objective was to ensure that military personnel could use AI tools in combat scenarios without needing company-level approval for operational decisions, such as countering drone swarms.
He further accused Amodei of having a “God complex” and suggested that corporate leaders should not dictate battlefield policies.
Pentagon spokesperson Sean Parnell has reportedly warned that failure to comply with the Department of War’s “all lawful purposes” clause could result in termination of the partnership. The department is also said to be considering labelling Anthropic a “supply chain risk” — a designation more commonly associated with adversarial foreign entities.
Core issue: who controls military AI use?
At the heart of the dispute is whether the Pentagon should have unrestricted access to Claude for all lawful defence-related purposes. The Department of War maintains that operational decisions must remain within the military chain of command and that lawful use should not be subject to corporate constraints.
Anthropic, however, argues that certain applications — particularly domestic surveillance and autonomous lethal weapons — pose profound ethical risks. The company has positioned itself as a proponent of “constitutional AI”, a framework that embeds normative principles directly into model behaviour.
The row highlights broader tensions in the rapidly expanding AI industry, where governments are increasingly seeking advanced systems for defence, surveillance and cybersecurity purposes. Companies, meanwhile, are grappling with the reputational and moral implications of military partnerships.
Musk’s competitive angle
Observers note that Musk’s intervention also carries commercial undertones. His own AI venture, xAI, which developed the Grok chatbot, has reportedly agreed to the Pentagon’s “all lawful purposes” framework.
By publicly criticising Anthropic’s stance, Musk appears to be positioning xAI as a more flexible, defence-friendly alternative in a market potentially worth billions of dollars globally.
The clash comes amid growing global debate over the governance of AI in warfare, with policymakers across the United States, Europe and Asia weighing regulatory frameworks for autonomous weapons and dual-use technologies.
As the Friday deadline approaches, the outcome of this standoff could shape how private AI firms engage with military clients in the future. Whether Anthropic stands firm or revises its conditions may set a precedent for the balance of power between Silicon Valley ethics and national security imperatives.
Conclusion
The Musk-backed Pentagon criticism of Anthropic underscores a deeper structural conflict: who ultimately controls advanced AI tools when national defence is at stake — elected governments or private developers? With both sides hardening their positions, the coming days may prove decisive not only for Anthropic’s government contracts but also for the evolving relationship between AI innovation and military power.
