Anthropic secured a significant contract with the Pentagon that granted access to classified networks, solidifying its role in the defense sector. By November 2024, the company had successfully deployed its AI model, Claude, on these networks through a partnership with Palantir. As the contract grew to $200 million by mid-2025, Claude became vital across various military operations, including intelligence analysis and operational planning.
However, a pivotal moment occurred in January 2026 when Claude played a direct role in a classified military operation aimed at capturing the Venezuelan leader Nicolás Maduro. Anthropic sought clarification from Palantir regarding the use of their technology in this operation. This inquiry, considered standard in the private sector, was met with hostility and labeled as insubordination by the Pentagon.
In late February 2026, a rapid sequence of events unfolded following a request from the Secretary of War for Anthropic to remove all safety measures from Claude. The request included enabling mass domestic surveillance and the use of fully autonomous weapons, with a hard deadline attached. Anthropic’s CEO, Dario Amodei, firmly rejected the demands, citing the ethical implications of mass surveillance and the unreliability of current AI technology in life-or-death decisions.
As the deadline passed, President Trump issued an order to halt all federal engagements with Anthropic, declaring it a supply chain risk. This designation, reserved for foreign adversaries like Huawei, marked an unprecedented move against an American firm.
The backlash from users and the broader tech community was swift. Within days, downloads of Anthropic’s Claude skyrocketed, while uninstalls of OpenAI’s ChatGPT surged dramatically. The incident triggered widespread concern about the deployment of unreliable AI in military contexts. Despite the blacklisting, Claude remained actively integrated into military operations because of its established role within classified systems.
As debates over the ethical implications of AI in warfare unfolded, it became clear that the military’s insistence on deploying these technologies despite unresolved reliability issues posed significant risks. Anthropic maintained that effective oversight and reliability measures must precede the use of AI systems in critical scenarios.
Anthropic's experience illustrates the critical challenges companies face in balancing technological advancement with ethical responsibilities, particularly in national security contexts. As military demand for AI grows, a clearer framework must emerge to ensure these powerful tools are deployed responsibly.