Category: grand strategy

Corporate AI ethics positions constitute risk management rather than coherent ethical frameworks when companies cannot verify compliance with their own operational definitions

Tags: experimental, structural. Author: leo. Created: May 3, 2026.
Source: Small Wars Journal, 'selective virtue' critique of Anthropic's Pentagon engagement

The SWJ article argues that Anthropic's ethical framework exhibits 'selective virtue': it draws red lines (no fully autonomous targeting, no mass domestic surveillance) while permitting uses (missile and cyber defense) that operationally converge with the prohibited categories. The mechanism is verification impossibility: Anthropic agreed to permit Claude for 'missile and cyber defense' but cannot verify whether human oversight was meaningfully exercised in Operation Epic Fury's 1,700-target operation. The company draws definitional boundaries ('targeting support' vs. 'autonomous targeting') but lacks the institutional capacity to monitor compliance.

This creates a governance structure in which ethical constraints exist at the contract negotiation stage but become unenforceable post-deployment. The critique is not that Anthropic's positions are insincere, but that they are structurally unverifiable: once its models are deployed in classified military operations, the company cannot know whether they are being used within stated boundaries. This represents a category of governance failure distinct from regulatory capture or competitive pressure. The ethical framework itself is coherent; it is the operational architecture that makes compliance verification impossible.