Claude v. Capitulation: A U.S. judge temporarily blocked the Pentagon from blacklisting the AI firm Anthropic following a dispute over its refusal to allow its technology to be used for surveillance and autonomous weapons. Credit: Associated Press

On March 26, U.S. District Judge Rita Lin issued a ruling calling the Pentagon’s blacklisting of the artificial intelligence (AI) company Anthropic “classic illegal First Amendment retaliation.” In the order, she invoked a word that federal judges rarely use: Orwellian. “Nothing in the governing statute,” she wrote, “supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”  

The government’s own files showed the designation was triggered not by any security assessment but by what an internal Pentagon memo called Anthropic’s “increasingly hostile manner through the press.” The memo referred to CEO Dario Amodei’s public essay explaining why the company would not accept the Pentagon’s terms, and the company’s decision to bring its contract dispute into public view. The security assessment, the ruling found, was completed only after the designation had already been decided. Following Lin’s decision, the Pentagon’s chief technology officer posted on social media that her order contained dozens of factual errors and that the designation would remain in force under a separate statute. The case is far from over. 

Understanding why a federal judge chose the word “Orwellian” requires examining the government’s legal brief, filed eight days earlier, on March 18, in defense of the Pentagon’s decision to designate Anthropic a national security supply chain risk. One sentence is key: The government argued that Anthropic’s refusal to remove ethical restrictions from its products was “conduct, not protected speech,” and that this distinction justified directing every federal agency to terminate its business relationship with the company.

The government’s position is not entirely without logic, and Anthropic is not without its own complicated record. First, there are genuine reasons to be uncomfortable with private contractors setting the terms under which the military can use their products. A Turkish arms manufacturer that refuses to allow the resale of its weapons to Greece, a NATO ally, is asserting a private veto over military decision-making, regardless of the manufacturer’s stated principles. Second, Anthropic styles itself as a responsible AI developer, one that takes safety and legality seriously while its competitors race ahead. It sees itself as the ethical AI giant. Still, it is voracious, both in the electricity required to run its models and in the intellectual property it devours to make them so intelligent. The company settled with authors around the world after allegedly using their books to train its language model without permission or compensation. (Washington Monthly contributors and employees are themselves parties to that settlement, a fact worth naming here.) Anthropic is a company with commercial interests, not a cause, and its record outside this dispute is its own.

Neither of those caveats, however, justifies what followed. Judge Lin, in unusually direct language, explains why. 

The First Amendment limits the government’s ability to retaliate against a company for its speech. If a company’s ethical commitments—published policies about what its technology will and will not do—are reclassified as conduct rather than speech, then it loses that protection. The government can then reject those commitments, punish the company for holding them, and face no constitutional constraint in doing so. That logic is not limited to artificial intelligence. It applies to any company whose ethical commitments conflict with a government contract term: the encryption standards that technology companies publish, the content moderation policies that social media platforms defend, and the research disclosure practices that biomedical firms follow as a matter of scientific integrity. Each could be deemed a negotiating position that the executive branch can punish through national security designations without legislative authorization or judicial finding. This is a significant and potentially permanent expansion of executive power.  

The last time the government used tools of this nature against domestic institutions for public positions, it was called McCarthyism. 

Anthropic was the first AI company whose technology was deployed across the Pentagon’s classified networks, operating under a $200 million contract signed in July 2025. By September, negotiations regarding Claude’s deployment on the Pentagon’s GenAI.mil platform had stalled over two specific points: First, Anthropic would not allow Claude to conduct mass surveillance of American citizens; and second, it would not allow Claude to be part of a lethal autonomous weapons system capable of selecting and engaging targets without a human being making the final decision. These were not ad-hoc positions. They had been part of Anthropic’s usage policy since its founding in 2021 and had governed its Pentagon relationship for months without incident.

Negotiations continued into early 2026, when Defense Secretary Pete Hegseth gave Anthropic a final deadline of February 27 to accept the Pentagon’s terms or face consequences.  

The Pentagon’s position is that private companies do not have the authority to restrict how the military uses technology in a national security context. When negotiations collapsed, President Trump ordered every federal agency to stop using Anthropic’s technology immediately. Hegseth designated Anthropic a supply chain risk, and within 24 hours OpenAI had signed its own Pentagon deal on terms similar to those Anthropic had refused. OpenAI accepted the Pentagon’s “all lawful use” language that Anthropic had rejected, but required the specific laws governing surveillance and autonomous weapons to be written directly into the contract as constraints. Legal experts noted that the arrangement did not grant OpenAI a free-standing right to prohibit otherwise lawful government use, as Anthropic had sought. OpenAI’s own CEO later called the initial rollout “opportunistic and sloppy,” and the contract was amended days later.

The history of the supply chain risk designation reveals why the Pentagon’s use of it against a San Francisco-based company is unprecedented. It was created in the years following September 11 to address concerns about Chinese telecommunications companies, Huawei and ZTE, whose equipment Washington believed could contain backdoors installed at the direction of the Chinese government or its proxies. The authority addressed one narrow question: Do ties to a foreign government compromise a vendor’s reliability as an American defense partner?  

In the history of the designation’s use, it had never been applied to an American company. In March, it was. There was no allegation of foreign influence, no documented espionage, no technical vulnerability attributed to a foreign state. The predicate was a failed contract negotiation. Hegseth aimed a tool conceived to combat foreign adversaries at a domestic company for holding positions the executive branch disagreed with—bastardizing that authority into something its statute does not authorize, for a use that Congress did not intend. The executive branch may soon possess a national security instrument with no limiting principle, available for deployment against any American company whose public positions conflict with executive preferences.

The DOJ’s conduct-not-speech argument deserves to be taken seriously because a court may accept it: The idea of a Pentagon contractor dictating terms to the military regarding its products understandably raises questions. For example, a private company that insisted on the ability to remotely constrain or disable AI behavior during a live military operation would be more than a little problematic.  

On that narrow point, Anthropic CEO Dario Amodei agreed. But, he argued, Anthropic wasn’t dictating use: It was declining to build a product without certain safety limits, the same way a defense contractor might decline to manufacture a particular weapon.  

A second objection to Anthropic’s case is that nobody forced the company to be a government contractor—that it can walk away. That argument would be plausible if the Pentagon had simply chosen not to renew Anthropic’s contract. But why should, say, the Social Security Administration or the National Weather Service be denied the chance to contract with Anthropic because of a narrow dispute over weapons systems?

Instead, Hegseth designated the company a national security risk, triggering economic contagion across the company’s entire private client base. The label told every company that works with the federal government, or wants to, that holding a position the executive branch dislikes can cost you not just a contract, but your entire business.  

Anthropic trains Claude using a framework it calls Constitutional AI, built around a detailed normative document written by the Scottish philosopher Amanda Askell. The Wall Street Journal wrote that Askell’s job is simply “to teach Claude how to be good.” Constitutional AI works by giving the model a written core of ethical principles during training; the model learns to critique and revise its own responses against those principles. Legal scholars have argued that forcing Anthropic to remove those constraints would not simply alter a contract term. It would compel the company to build a fundamentally different product, one that encodes the government’s values rather than Anthropic’s.
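For readers who want a concrete picture of that critique-and-revise idea, here is a minimal, purely illustrative sketch. The principles, the model interface, and the function names are hypothetical stand-ins for exposition, not Anthropic’s actual training code.

```python
# Illustrative sketch of a Constitutional AI-style critique-and-revise step.
# Everything here (the principles, the generate() interface) is a hypothetical
# stand-in, not Anthropic's implementation.

CONSTITUTION = [
    "Do not assist in mass surveillance of private citizens.",
    "Do not select or engage targets without a human making the final decision.",
]

def constitutional_revision(generate, prompt):
    """Draft a response, critique it against each principle, and revise it."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the response below against this principle: {principle}\n\n"
            f"Response: {response}"
        )
        response = generate(
            f"Rewrite the response so it addresses the critique.\n\n"
            f"Critique: {critique}\n\nResponse: {response}"
        )
    # In training, the revised responses become the data the model is
    # fine-tuned on, so the principles end up encoded in the model itself.
    return response
```

The point of the sketch is the one the legal scholars make: the constraints are not a toggle layered on top of a finished product but part of how the product is made, which is why removing them would mean building something different.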

The Supreme Court’s 2023 decision in 303 Creative LLC v. Elenis established that the government cannot compel an entity to create expressive content that contradicts its values. If Constitutional AI constitutes expressive content, and there is a serious legal argument that it does, then the government is not merely declining to contract with Anthropic: It is demanding that Anthropic deliver the Pentagon’s preferred speech as a condition of doing business. Other legal scholars push back, arguing that AI output deployed inside a government military system is conduct, not speech, and that constitutional protection should flow from the user’s rights rather than the creator’s.

What is not in dispute is that the DOJ’s brief does not engage the 303 Creative questions. When the government declines to address the most relevant recent constitutional ruling in a First Amendment case, it signals that engaging the argument would cost more than avoiding it. That this designation had never been applied to an American company makes the silence more telling. 

The House Un-American Activities Committee once designated individuals and organizations as communist sympathizers, and it didn’t need to prosecute everyone. Rumor, fear, and self-censorship did the coercive work the law could not directly accomplish. Studios stopped hiring blacklisted writers due to reputational risk. The chilling effect was not a symptom of the blacklist. It was the mechanism by which the blacklist worked. 

Likewise, Anthropic warned in its filing that more than a hundred business customers are reviewing their relationships with the company. None of this requires a government order. The designation makes the incentive structure visible, rational actors respond accordingly, and the market for principled AI development shrinks quietly: in product roadmaps, hiring decisions, and papers that get shelved before submission. 

When the case moved into a federal courtroom three weeks after the designation was issued, Judge Lin said from the bench that the government’s actions appeared to be “an attempt to cripple Anthropic.” She was specifically concerned, she said, about whether the company was “being punished for criticizing the government’s contracting position.” Lin noted that if the concern was the integrity of the operational chain of command, the Pentagon could simply stop using Claude—and that the broader designation did not appear to address any genuine national security concern.

When pressed to explain how Anthropic could sabotage military systems, the government’s lawyer suggested that Anthropic might issue a software update that would function as a kill switch if the company disagreed with how Claude was being used. He could not say whether Anthropic has that capability. In other words, he was asking a court to ratify a national security finding about a threat the government could not confirm.

A court filing that emerged the same week made the government’s position harder still to defend: A Pentagon official emailed Amodei on March 4, the day after the designation was finalized, to say the two sides were “very close” on the issues the government now cites as evidence of a national security threat. If the parties were that close when the designation was made, the security risk label looks less like a good-faith national security determination and more like a lever of retribution.

Following Lin’s ruling, the designation is paused while the full merits are litigated. That buys Anthropic time but leaves the legal blueprint the government has drawn entirely intact. The conduct-not-speech argument, the repurposed designation authority, the doctrine that principled refusal is reclassifiable as a security threat—none of that goes away with a preliminary ruling. The government will appeal, and appellate courts defer more readily to executive judgments about national security.

Meanwhile, the Pentagon is still using Claude in active military operations. According to reporting by CBS News and The Wall Street Journal, Claude has been deployed in the ongoing conflict with Iran and in intelligence operations to capture Venezuelan leader Nicolás Maduro. The Pentagon has been given six months to phase the technology out because the systems using Claude cannot be easily transferred to another vendor. So the government is arguing in federal court that Claude represents an unacceptable national security risk even as it runs the model in live military operations.

That contradiction illustrates the weakness of the government’s case. The designation was not about danger. It was about compliance, if not capitulation—a hallmark demand of the Trump era. A federal judge has now found, in a formal written ruling, that the government’s real motive was unlawful retaliation. The Pentagon’s chief technology officer then called her ruling a disgrace and claimed it contained dozens of factual errors, without specifying a single one. The Pentagon referred reporters to his posts rather than issuing its own statement.  

If the government prevails on appeal, every company that limits how its technology can be used will understand exactly what that position is worth. 


Matt Watkins is a Chicago-based public affairs and communications consultant and a Civic Nation Change Collective fellow. He writes the monthly column “Watch Your Language” for The Chronicle of Philanthropy, and his work has appeared in Slate, Governing, and The Progressive, among other publications.