A federal judge in San Francisco handed Anthropic a significant legal victory on Thursday, temporarily blocking the Pentagon from labelling the Claude AI maker a “supply chain risk” — a designation that had threatened to cut the company off from hundreds of millions of dollars in government contracts. But lawyers and lobbyists tracking the case are warning that the win may be more fragile than it appears.
U.S. District Judge Rita Lin issued the preliminary injunction, also blocking enforcement of President Donald Trump’s directive ordering all federal agencies to stop using Anthropic’s Claude models. In her 43-page ruling, Lin was direct about the government’s conduct.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” she wrote. She also found the action to be “classic illegal First Amendment retaliation.”
How it got here
The conflict traces back to a contract dispute that went public in February. Anthropic had signed a $200 million deal with the Pentagon in July 2025, but as the company began negotiating Claude’s deployment on the DoD’s GenAI.mil platform in September, talks broke down. The Pentagon wanted unfettered access to Claude across all lawful purposes; Anthropic refused to allow its technology to be used for fully autonomous weapons or domestic mass surveillance.
When negotiations collapsed, Defence Secretary Pete Hegseth declared Anthropic a supply chain risk in late February — a designation never before publicly applied to an American company, and one that required defence contractors, including Amazon, Microsoft, and Palantir, to certify they were not using Claude in any military work. Three deals worth over $180 million that were close to signing fell apart as a direct result, and three contractors either ended or were told to end their work with the company, according to Judge Lin’s order.
Trump escalated further with a post on Truth Social ordering all federal agencies to “immediately cease” all use of Anthropic’s technology: “WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about.”
Judge Lin found the retaliation clear-cut. She ruled that the Pentagon’s own records showed it had moved against Anthropic because of the company’s “hostile manner through the press” — and that it was only after Anthropic publicly raised concerns about how its technology was being used that the government announced its plan to blacklist the company.
Why the celebration may be premature
Despite the ruling, legal experts caution that Anthropic is not out of the woods.
The core problem is structural. The Trump administration relied on two separate statutes to justify the supply chain designation — and one of them, 41 USC 4713, can only be adjudicated in the D.C. Circuit Court of Appeals. Unless that court also issues an injunction, the designation against Anthropic remains in place while judges decide the merits, a process that could take months or years.
“Practically speaking, not that much has changed on the supply chain designation for Anthropic due to this preliminary injunction. I think a lot of the public reaction to this is premature, and doesn’t reflect an understanding of the actual situation.”
— Charlie Bullock, Lawyer & Senior Research Fellow, Institute For Law & AI, Boston, Massachusetts
Two of the three judges on the D.C. Circuit panel — Gregory Katsas and Neomi Rao — were appointed by President Trump. Both have taken an expansive view of executive power in national security matters. Bullock said it is “likely, in fact,” that they will rule differently from Judge Lin, given the “capacious language about determinations related to national security” in the statute at issue.
“After yesterday’s ruling, at least one of the supply chain risk designations is gone. But for Anthropic, from a business perspective, you need both of them gone before it actually helps you.”
— Saif Khan, Former National Security Official, Biden Administration; Fellow at the Institute of Progress, Washington, DC
Industry groups echoed the concern. Paul Lekas, head of global public policy at the Software and Information Industry Association, said that while the court’s “reasoned opinion provides a degree of comfort for the business community, the legal process is far from over.” He noted that “a cloud remains over the business community” as long as the appeals are pending.
In the immediate wake of Judge Lin’s ruling, top Pentagon official Emil Michael insisted publicly that the supply chain risk designation against Anthropic remains in place — underscoring the limits of Thursday’s win.
What comes next
Judge Lin paused her ruling for one week to give the Trump administration time to seek relief from an appeals court. She made clear that her order does not require the Pentagon to use Anthropic’s products, nor does it prevent the DoD from transitioning to a different AI vendor entirely.
Anthropic’s lawyers, for their part, sent a letter to the D.C. Circuit on Thursday citing Lin’s decision as further grounds to grant a stay in the parallel case. Whether that panel agrees will determine whether this week’s ruling is the turning point — or merely a pause in a much longer battle.
“This is really unpredictable,” said Saif Khan. “So it’s a frustrating situation for Anthropic.”
Most observers believe Anthropic will ultimately prevail on the merits.
“We think in the end the courts will agree that the Department has not made a showing to support its extraordinary claim that Anthropic poses a supply chain risk,” said Lekas.
But such a ruling may be cold comfort if it arrives after months of carrying the supply chain risk label — and the revenue damage that comes with it.