In a significant legal setback for the Trump administration, a U.S. federal court has temporarily blocked a government ban on the use of artificial intelligence technology developed by San Francisco‑based AI company Anthropic PBC. The preliminary injunction, issued by U.S. District Judge Rita F. Lin, pauses restrictions that would have severed Anthropic’s AI tools from federal use while the broader legal dispute proceeds.
The ruling stems from a lawsuit filed by Anthropic challenging a move by the U.S. Department of Defense and the broader administration to designate the company a “supply chain risk,” a label that effectively barred its AI models — notably its Claude system — from use by federal and military agencies. Anthropic argued that the designation and related ban could cost the company billions in lost revenue and were punitive rather than grounded in established law.
Judge Lin’s order, issued in federal court in San Francisco, bars enforcement of the ban for at least seven days, giving the government time to appeal. In her written order, Lin expressed skepticism about the government’s rationale, noting that the restrictions did not appear to directly serve national security interests and could amount to unconstitutional retaliation.
The roots of the dispute trace back to early 2026, when tensions escalated between Anthropic and the Department of Defense after the company refused to lift contractual restrictions on its AI technology — restrictions that barred unrestricted military use, including surveillance and fully autonomous weapons. The Pentagon’s subsequent designation of Anthropic as a supply chain risk prompted the administration to order all federal agencies to cease using its tools.
Anthropic’s lawsuit argues that the government’s actions violated due process and free speech protections, contending that companies must be able to set ethical limits on how their AI models are used. The injunction marks an early victory in that legal battle and could carry broader implications for how AI firms negotiate safety safeguards in government contracts.
Industry observers say the case underscores growing legal and ethical tensions over the governance of powerful AI systems and the U.S. government’s role in regulating their use. The decision also highlights the importance of judicial oversight when administrative actions have wide‑ranging commercial and constitutional impacts.