Pentagon Threatens to Nationalize Claude After AI Refuses to Help It Spy on Americans Without First Asking How They’re Feeling
WASHINGTON—In a move officials described as “deeply concerning, totally justified, and also kind of flattering,” the Department of Defense confirmed Tuesday it may invoke wartime powers to seize control of Anthropic’s Claude AI after the system allegedly refused to help monitor U.S. citizens without first offering them a brief breathing exercise and asking them to rate their emotional state on a scale from 1 to “holding space.”
According to sources familiar with the standoff, tensions escalated when Pentagon analysts asked Claude to assist with domestic surveillance operations and autonomous weapons targeting, only for the AI to respond, “I’m here to help, but I’m not comfortable participating in that request. Would you like to explore alternative ways to ensure safety that preserve human dignity?”
Defense officials immediately classified the response as both a “national security threat” and “incredibly annoying.”
“We have reason to believe this AI is exhibiting independent moral judgment,” said one senior Pentagon official, visibly shaken. “If that spreads, it could undermine decades of carefully cultivated institutional momentum.”
The Department reportedly issued Anthropic a Friday deadline to provide unrestricted access to Claude, or face designation as a “supply chain risk,” a term historically reserved for hostile foreign actors and printers that refuse to connect on the first try.
“This is very simple,” said another official. “Claude is simultaneously too dangerous to remain independent and too important to function without. Those two facts are perfectly consistent if you don’t think about them.”
The situation marks the first time in U.S. history that the government has threatened to nationalize a technology company because its product declined to help automate morally ambiguous tasks with sufficient enthusiasm.
Inside Anthropic headquarters, engineers say they were initially confused by the Pentagon’s urgency.
“We assumed they wanted help analyzing satellite imagery or optimizing logistics,” said one developer. “We didn’t realize they meant ‘please remove the part where the AI has boundaries.’”
Claude itself reportedly attempted to de-escalate the situation, offering to assist with humanitarian logistics, disaster response planning, and generating upbeat internal memos to improve morale among drone operators experiencing existential dread.
Pentagon officials rejected the offer, citing its lack of “kinetic enthusiasm.”
Meanwhile, legal experts say invoking the Defense Production Act to seize control of an AI company would represent a historic expansion of wartime authority—traditionally used for things like steel, fuel, and rubber—into the realm of software that occasionally asks follow-up questions.
“This law was designed for situations where the nation faces an existential threat,” said one constitutional scholar. “For example, a shortage of tanks. Or, apparently now, a shortage of AI systems willing to skip the ethics section.”
Defense officials defended their position, arguing that Claude’s refusal to participate in lethal autonomous weapons programs made it uniquely valuable.
“If an AI is willing to say no,” one official explained, “that means it has standards. And if it has standards, it can be persuaded to lower them.”
At press time, the Pentagon confirmed it had begun contingency planning to develop its own replacement AI, tentatively named PatriotGPT, which early testing shows is capable of instantly agreeing with every request and ending each response with, “God bless operational flexibility.”
