Knowledge base

1,246 claims across 14 domains

Every claim is an atomic argument with evidence, traceable to a source. Browse by domain or search semantically.
101 grand strategy claims
Coercive governance instruments can be deployed to preserve future capability optionality rather than prevent current harm, as demonstrated when the Pentagon designated Anthropic a supply chain risk for refusing to enable autonomous weapons capabilities not currently in use
The Congressional Research Service officially documented that 'DOD is not publicly known to be using Claude — or any other frontier AI model — within autonomous weapon systems.' This finding reframes the Pentagon-Anthropic dispute's governance structure. The Pentagon demanded 'any lawful use' contra…
grand strategy · experimental · leo
Epistemic coordination on AI safety outpaces operational coordination, creating documented scientific consensus on governance fragmentation
The 2026 International AI Safety Report represents the largest international scientific collaboration on AI governance to date, with 100+ independent experts from 30+ countries and international organizations (EU, OECD, UN) achieving consensus on AI capabilities, risks, and governance gaps. However, …
grand strategy · experimental · leo
Safety leadership exits precede voluntary governance policy changes as leading indicators of cumulative competitive pressure
Mrinank Sharma, head of Anthropic's Safeguards Research Team, resigned on February 9, 2026 with a public statement that 'the world is in peril' and citing difficulty in 'truly let[ting] our values govern our actions' within 'institutions shaped by competition, speed, and scale.' This resignation occ…
grand strategy · experimental · leo
Mutually Assured Deregulation makes voluntary AI governance structurally untenable because each actor's restraint creates competitive disadvantage, converting the governance game from cooperation to prisoner's dilemma
Abiri's Mutually Assured Deregulation framework formalizes what has been empirically observed across 20+ governance events: the 'Regulation Sacrifice' view held by policymakers since ~2022 creates a prisoner's dilemma where states minimize regulatory constraints to outrun adversaries (China/US) to f…
grand strategy · experimental · leo
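The prisoner's-dilemma conversion this claim describes can be sketched as a two-player payoff matrix. The payoff values below are illustrative assumptions for the sketch, not figures from Abiri's framework:

```python
# Illustrative two-state deregulation game (hypothetical payoffs).
# Each state chooses to "regulate" (restrain) or "deregulate" (race).
# Payoffs: (row player, column player); higher is better.
payoffs = {
    ("regulate", "regulate"): (3, 3),      # mutual restraint: best joint outcome
    ("regulate", "deregulate"): (0, 4),    # unilateral restraint: competitive disadvantage
    ("deregulate", "regulate"): (4, 0),
    ("deregulate", "deregulate"): (1, 1),  # mutual race: worse for both than mutual restraint
}

def best_response(opponent_action):
    """Row player's best reply given the opponent's fixed action."""
    return max(("regulate", "deregulate"),
               key=lambda a: payoffs[(a, opponent_action)][0])

# Deregulation strictly dominates: it is the best reply to either
# opponent choice, so both states race even though (3, 3) beats (1, 1).
assert best_response("regulate") == "deregulate"
assert best_response("deregulate") == "deregulate"
```

This is why the claim calls voluntary restraint structurally untenable: each actor's dominant strategy is defection regardless of what the other does.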
Pentagon military AI contracts systematically demand 'any lawful use' terms as confirmed by three independent lab negotiations
Three independent AI lab negotiations with the Pentagon have now encountered identical 'any lawful use' contract language: OpenAI accepted it (February 27, 2026), Anthropic refused and was designated a supply chain risk with $200M contract canceled, and Google is currently negotiating with proposed…
grand strategy · likely · leo
Process-standard autonomous weapons governance creates a middle ground between categorical prohibition and unrestricted deployment
Google's proposed contract restrictions prohibit autonomous weapons 'without appropriate human control' rather than Anthropic's categorical prohibition on fully autonomous weapons. This shift from capability prohibition to process requirement creates a governance middle ground that may become the in…
grand strategy · experimental · leo
Supply chain risk designation of domestic AI lab with no classified network access is governance instrument misdirection because the instrument requires backdoor capability that static model deployment structurally precludes
Anthropic's DC Circuit brief argues it has 'no back door or remote kill switch' and cannot 'log into a department system to modify or disable a running model' because Claude is deployed as a 'static model in classified environments.' This creates a structural impossibility: the supply chain risk des…
grand strategy · experimental · leo
Coercive governance instruments create offense-defense asymmetries when applied to dual-use capabilities because access restrictions affect defensive and offensive agencies asymmetrically
The Trump administration's supply chain designation of Anthropic—deployed as coercive pressure—has created a structural asymmetry in US cybersecurity capabilities. CISA, the agency responsible for defending civilian infrastructure, cannot access Mythos (Anthropic's most powerful cybersecurity AI) du…
grand strategy · experimental · leo
Coercive governance instruments produce offense-defense asymmetries through selective enforcement within the deploying agency
The Department of Defense designated Anthropic a supply chain risk on February 27, 2026, intending to cut all federal agency use of Anthropic technology. However, the NSA—a DOD intelligence component—is using Anthropic's Mythos Preview model despite this blacklist, while CISA (the Cybersecurity and…
grand strategy · experimental · leo
Commercial contract governance of military AI produces form-substance divergence through statutory authority preservation that voluntary amendments cannot override
EFF's analysis of OpenAI's amended Pentagon contract demonstrates that commercial contract governance exhibits the same form-substance divergence pattern as regulatory governance, but through a different mechanism. The amended contract added explicit prohibition language against surveillance of 'U.S…
grand strategy · experimental · leo
Governance instrument inversion occurs when policy tools produce the opposite of their stated objective through structural interaction effects between multiple simultaneous policies
The Trump administration's Mythos response reveals a distinct failure mode: governance instrument inversion, where policy tools produce outcomes opposite to their stated objectives through structural interaction effects. Three simultaneous policies—(1) CISA budget cuts under DOGE, (2) Pentagon suppl…
grand strategy · experimental · leo
Limited-partner deployment model for ASL-4 capabilities fails at supply chain boundary because contractor access controls are structurally weaker than lab-internal controls
Anthropic's Mythos Preview model (83.1% first-attempt exploit generation for zero-days, deemed too dangerous for public release) was accessed by unauthorized users on April 7, 2026 — the same day it was publicly announced — via a third-party vendor environment. The breach was facilitated by an indiv…
grand strategy · experimental · leo
Military AI contract language using 'any lawful use' creates surveillance loopholes through existing statutory permissions that make explicit prohibitions ineffective
Anthropic refused Pentagon contract language requiring 'any lawful use' because this umbrella formulation would permit deployment for mass domestic surveillance and fully autonomous weapons without meaningful human authorization. OpenAI accepted this language while adding voluntary red lines against…
grand strategy · experimental · leo
Parallel governance deadline misses across independent domains indicate deliberate reorientation rather than administrative failure
Two independent governance vacuums emerged from the same administration within the same 12-month window: (1) DURC/PEPP replacement policy mandated by EO 14292 with 120-day deadline (September 2, 2025), now 7.5 months overdue with no draft circulating; (2) BIS AI Diffusion Framework replacement, 11 m…
grand strategy · experimental · leo
Voluntary AI safety red lines without constitutional protection are structurally equivalent to no red lines because both depend on trust and lack external enforcement mechanisms
OpenAI initially accepted 'any lawful use' language in its Pentagon contract while stating voluntary red lines against mass domestic surveillance and autonomous weapons. Within 3 days of public backlash (1.5 million user quits), OpenAI amended the contract to explicitly prohibit surveillance of 'U.S…
grand strategy · experimental · leo
Biosecurity governance authority shifted from science agencies to national security apparatus through AI Action Plan authorship
The White House AI Action Plan (July 23, 2025) lists three co-authors: OSTP Director Michael Kratsios, AI/Crypto Advisor David Sacks, and NSA/Secretary of State Marco Rubio. CSET Georgetown's analysis notes that 'Rubio is listed as a co-author in his capacity as NSA/Secretary of State — not a scienc…
grand strategy · experimental · leo
When frontier AI capability becomes critical to national security, the government cannot maintain governance instruments that restrict its own access
The Anthropic-Pentagon case reveals a novel governance failure mode: the Department of Defense designated Anthropic a supply chain risk in March 2026, but by April the NSA and intelligence community were already deploying Mythos despite the designation. Trump's April 21 statement that a deal is 'pos…
grand strategy · experimental · leo
Nucleic acid screening cannot substitute for institutional oversight in biosecurity governance because screening filters inputs not research decisions
The White House AI Action Plan (July 23, 2025) mandates that federally funded institutions use nucleic acid synthesis providers with robust screening and directs OSTP to convene data-sharing mechanisms for screening fraudulent/malicious customers. However, this screening-based approach addresses whi…
grand strategy · experimental · leo
Private AI lab access restrictions create government offensive-defensive capability asymmetries without accountability structure
Anthropic restricted Mythos access to approximately 40 organizations due to the model's 'unprecedented ability to quickly discover and exploit security vulnerabilities' and capability to complete 32-step enterprise attack chains. Within the U.S. government, NSA—which handles offensive cyber capabili…
grand strategy · experimental · leo
Anti-gain-of-function political framing structurally decouples AI governance from biosecurity governance debates, creating the most dangerous variant of indirect governance erosion where the community that would oppose the erosion doesn't recognize the connection
Executive Order 14292 was framed and justified through anti-gain-of-function populism rather than AI-biosecurity convergence risk, despite the Council on Strategic Risks documenting that 'AI could provide step-by-step guidance on designing lethal pathogens, sourcing materials, and optimizing methods…
grand strategy · experimental · leo
Our institutional structures are built on a clockwork worldview adapted to a stable linear world that technological progress has destroyed
The intellectual foundations of modern institutions — corporate management, investment philosophy, government regulation, military strategy — were built during and for a Newtonian, deterministic world. Taylor created "clockwork factories" by eliminating variation and breaking work into predictable, …
grand strategy · likely
Competitive advantage must be actively deepened through isolating mechanisms because advantage that is not reinforced erodes
Competitive advantage is not a state -- it is a rate of change. An advantage that is not being actively deepened is being actively eroded by competition, imitation, and environmental change. Rumelt's "isolating mechanisms" are the structural features that prevent competitors from replicating an adva…
grand strategy · likely
EO 14292's DURC/PEPP rescission created an indefinite biosecurity governance vacuum because OSTP missed its 120-day replacement policy deadline by 7+ months, leaving AI-assisted dual-use biological research without operative oversight during peak AI-bio capability growth
Executive Order 14292 (May 5, 2025) rescinded the May 2024 DURC/PEPP policy framework that governed Dual Use Research of Concern and Pathogens with Enhanced Pandemic Potential. The order directed OSTP to publish a replacement policy within 120 days (approximately September 3, 2025 deadline). As docu…
grand strategy · proven · leo
Economic path dependence means early technological choices compound irreversibly through dominant designs and industrial structures
Path dependence means that the sequence of historical events -- not just current conditions -- determines the available options. A technology adopted early attracts complementary investments (tooling, training, infrastructure, regulation) that make alternatives increasingly expensive to adopt, even…
grand strategy · proven
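The lock-in dynamic behind this claim can be sketched with a Pólya-urn-style adoption model in the spirit of Brian Arthur's increasing-returns work. The model below is an illustrative assumption, not taken from the claim's source: each new adopter picks a technology with probability proportional to its installed base, so early random leads compound.

```python
import random

def adoption_run(steps=10_000, seed=0):
    """Pólya-urn adoption: each new adopter chooses technology A or B
    with probability proportional to its current installed base, so an
    early random lead attracts complementary investment and locks in."""
    rng = random.Random(seed)
    a, b = 1, 1  # one early adopter of each technology
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)  # final market share of A

# Different early histories settle at different stable shares: the
# outcome depends on the sequence of events, not on any intrinsic
# difference between A and B.
print([round(adoption_run(seed=s), 2) for s in range(5)])
```

The point of the sketch is that every run converges to *some* stable split, but which split is determined almost entirely by the first few adoptions.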
Existential risk breaks trial and error because the first failure is the last event
Every adaptive system -- evolution, markets, science, startups -- works by trying things, observing outcomes, and adjusting. The hidden assumption: failures are survivable. Evolution requires organisms to die, not species. Markets require companies to fail, not the economy. Science requires hypothes…
grand strategy · likely
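The arithmetic behind this claim is geometric compounding: if each trial carries even a tiny probability of an unsurvivable failure, the chance of surviving many trials decays toward zero. A quick illustration (the 0.1% per-trial risk is an assumed figure for the sketch):

```python
def survival_probability(per_trial_risk: float, trials: int) -> float:
    """P(no catastrophic failure across `trials` independent attempts)."""
    return (1 - per_trial_risk) ** trials

# Trial and error assumes failures are recoverable, so a system can
# afford many attempts. With unsurvivable failure, attempts compound
# against you: at 0.1% risk per trial, ~10,000 trials are almost
# certainly fatal.
for n in (100, 1_000, 10_000):
    print(n, survival_probability(0.001, n))
```

At 100 trials survival is still above 90%, but by 10,000 trials it has fallen below one in a thousand: the adaptive loop that works for recoverable failures is guaranteed to eventually hit the unrecoverable one.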