Three-level formal governance architecture creates mutually reinforcing accountability absorption through executive mandate, corporate nominal compliance, and legislative information requests
The three-level architecture operates through structural interdependence, not additive failure.

Level 1 (Hegseth mandate): Secretary Hegseth's AI strategy memo mandated 'any lawful use' language in all DoD AI contracts within 180 days, converting the MAD mechanism into a legal compliance requirement and creating affirmative compliance risk for labs attempting safety constraints (cf. the Anthropic supply-chain risk designation precedent).

Level 2 (Corporate nominal compliance): Google's April 28 classified Pentagon deal includes advisory language ('should not be used for' mass surveillance or autonomous weapons) alongside contractual government adjustment rights and air-gapped networks that prevent vendor monitoring. OpenAI's March contract was amended post-backlash with an explicit domestic surveillance prohibition, but EFF analysis identified structural loopholes ('US persons' definitional gaps, foreign intelligence carve-outs). Both labs arrive at the same governance state: nominal safety language, no operational constraint in classified environments.

Level 3 (Legislative oversight): Senators' March information requests to AI companies, led by Warner, acknowledged that the 'any lawful use standard provides unacceptable reputational risk' (documenting the same MAD mechanism Congress observes), set an April 3 deadline, and received zero public responses; yet every addressed company signed the May 1 seven-company deal without behavioral modification.

The vacuum is stable because: (1) the Hegseth mandate removes the market incentive for voluntary constraint that would give Level 3 leverage; (2) nominal compliance satisfies the public accountability pressure that would otherwise drive Level 3 action; (3) Level 3 lacks statutory authority to break the Level 1-2 dynamic without passing new legislation. Each level absorbs accountability pressure that would otherwise compel substantive action at the next level.