Anthropic, a prominent AI startup, recently won a significant legal victory when a federal judge in San Francisco granted a preliminary injunction. The order temporarily bars the Trump administration from enforcing actions that blacklisted the company and limited federal agencies' ability to use its AI models, known as Claude.
In her ruling, US District Judge Rita Lin found that Anthropic had demonstrated a strong likelihood of success on key aspects of its case, noting that the government's actions appeared more retaliatory than driven by legitimate security concerns. The ruling also points to potential First Amendment violations, since the government's actions allegedly punished Anthropic for speaking publicly about its contracting policies.
Currently, the injunction prohibits the administration from executing or enforcing directives issued by President Trump that target Anthropic. It also halts efforts by the Pentagon to classify the company as a national security risk. The ruling remains on hold for seven days to allow for a potential government appeal.
The legal dispute traces back to Anthropic's refusal to lift certain safety restrictions surrounding its Claude models during negotiations with the Pentagon. The company has made it clear that it will not permit its technology to be used for fully autonomous weapons without human oversight or for the mass surveillance of U.S. citizens, while still being open to broader collaborations with the government.
In response to Anthropic's stance, President Trump acted in late February to instruct federal agencies to discontinue use of Anthropic's technology. Defense Secretary Pete Hegseth also categorized the company as a supply chain risk, a designation that could compel defense contractors to avoid Anthropic's models in military applications. It was the first time such a label had been publicly applied to an American company.
The stakes in this case are considerable, given Anthropic's importance as an AI provider for the U.S. government. The company holds a $200 million contract with the Pentagon and has previously deployed its models within classified Defense Department systems. The relationship faltered, however, over disputes about how the technology could be used.
Adding complexity to the situation, the Trump administration relied on different legal frameworks for the Pentagon's blacklisting action and the broader federal procurement restrictions. This dual approach has prompted Anthropic to mount legal challenges in multiple courts; a separate case concerning civilian government contracting is proceeding in Washington.