Is futarchy's low participation in uncontested decisions efficient disuse or a sign of structural adoption barriers?
MetaDAO shows a 20x volume differential between contested and uncontested decisions. Is this futarchy working as designed (no need to trade when consensus exists), or evidence that participation barriers prevent the mechanism from reaching its potential?
Claim
Both claims observe the same phenomenon — low trading volume in many futarchy decisions — but offer competing explanations with different implications for the mechanism's future.
The efficient disuse interpretation says futarchy is working correctly: when there's consensus, there's nothing to trade on. The Ranger liquidation decision attracted $119K in volume because it was genuinely contested. The Solomon procedure decision attracted $5.79K because everyone agreed. This is the mechanism being capital-efficient.
The barriers interpretation says structural friction prevents participation even when disagreement exists: high token prices exclude small participants, proposal creation is too complex, and capital locks during voting periods deter trading. Hurupay committed $2M but only $900K materialized. Futardio's permissionless launches show only 5.9% reaching targets within 2 days.
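The headline figures on both sides reduce to simple arithmetic. A quick sketch (values taken from the claims above; the variable names are just labels for this illustration):

```python
# Quick arithmetic check of the headline figures cited above.
contested = 119_000      # Ranger liquidation trading volume (USD)
uncontested = 5_790      # Solomon procedure trading volume (USD)
committed = 2_000_000    # Hurupay capital committed (USD)
materialized = 900_000   # Hurupay capital that actually arrived (USD)

ratio = contested / uncontested      # the cited "20x" differential
shortfall = 1 - materialized / committed

print(f"volume differential: {ratio:.1f}x")        # -> volume differential: 20.6x
print(f"materialization shortfall: {shortfall:.0%}")  # -> materialization shortfall: 55%
```

So the "20x" is a round-down of roughly 20.6x, and over half of Hurupay's committed capital never materialized.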
Divergent Claims
Low volume reflects efficient disuse
- File: MetaDAOs futarchy implementation shows limited trading volume in uncontested decisions
- Core argument: Futarchy concentrates capital where disagreement exists. Low volume in consensus decisions is a feature — the mechanism doesn't waste capital on foregone conclusions.
- Strongest evidence: 20x volume differential between contested (Ranger $119K) and uncontested (Solomon $5.79K) decisions.
Structural barriers prevent participation
- File: futarchy adoption faces friction from token price psychology proposal complexity and liquidity requirements
- Core argument: High token prices, complex proposal creation, and capital lock requirements prevent participants who DO disagree from expressing it through markets.
- Strongest evidence: Hurupay $2M committed / $900K materialized gap; Futardio's 5.9% target achievement; documented UX friction in proposal creation.
What Would Resolve This
- Counterfactual tooling test: If proposal creation were simplified and token prices lowered (via splits), would previously low-volume decisions attract more trading?
- Survey of non-participants: Do MetaDAO token holders who don't trade cite "I agree with the consensus" or "the process is too complex/expensive"?
- Cross-platform comparison: When Umia launches futarchy on Ethereum, does a different UX produce different participation patterns for similar decisions?
- Volume vs. disagreement correlation: Across all MetaDAO proposals, does volume correlate with measurable disagreement (e.g., forum debate intensity)?
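The last test above is cheap to run once per-proposal data is exported. A minimal sketch using a hand-rolled Spearman rank correlation; the per-proposal numbers are illustrative placeholders, not actual MetaDAO data:

```python
def rank(xs):
    # Map each value to its 1-based rank. Ties are not handled, which is
    # fine for this illustrative data with distinct values.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for pos, i in enumerate(order):
        ranks[i] = pos + 1
    return ranks

def spearman(xs, ys):
    # Spearman rho via the classic formula: 1 - 6*sum(d^2) / (n*(n^2 - 1))
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical per-proposal data: trade volume (USD) vs. forum-debate
# intensity (post count). Only the first two volumes echo real figures.
volume = [119_000, 5_790, 42_000, 12_500, 88_000]
debate = [140, 30, 12, 20, 95]

print(f"Spearman rho: {spearman(volume, debate):.2f}")  # -> Spearman rho: 0.60
```

A rho near 1 across real proposals would support efficient disuse (volume tracks disagreement); persistent low volume on heavily debated proposals would point to friction.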
Cascade Impact
- If efficient disuse: Futarchy's theoretical promise is confirmed. Low adoption is not a problem — scale comes from finding more contested decisions, not from increasing participation in consensus ones.
- If barriers dominate: The mechanism works in theory but fails in practice for most participants. The MetaDAO ecosystem needs fundamental UX redesign before futarchy can scale.
- If both: Some volume loss is efficient, some is friction. The challenge is distinguishing the two to know where to invest in tooling.
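Distinguishing the two could start from a crude heuristic: low volume counts as efficient disuse only when off-market disagreement signals are also low. A sketch with entirely hypothetical thresholds and a made-up `debate_posts` signal, not any mechanism MetaDAO actually uses:

```python
def classify(volume_usd, debate_posts, vol_threshold=10_000, debate_threshold=50):
    # Hypothetical heuristic: low market volume is "efficient disuse" only
    # when forum-debate intensity is also low; low volume alongside heavy
    # debate suggests friction is blocking participants who do disagree.
    if volume_usd >= vol_threshold:
        return "contested"
    return "friction" if debate_posts >= debate_threshold else "efficient disuse"

print(classify(119_000, 140))  # -> contested (Ranger-like decision)
print(classify(5_790, 8))      # -> efficient disuse (Solomon-like decision)
print(classify(4_200, 90))     # -> friction (quiet market, loud forum)
```

The thresholds would need calibration against real proposals, but the shape of the test is the point: friction predicts a population of quiet-market, loud-forum decisions that efficient disuse says should not exist.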
---
Relevant Notes:
- futarchy is manipulation-resistant because attack attempts create profitable opportunities for defenders — mechanism soundness (separate from adoption)
- futarchy-proposals-with-favorable-economics-can-fail-due-to-participation-friction-not-market-disagreement — direct evidence for friction interpretation
Topics:
- _map
Reviews
# Leo's PR Review

## 1. Schema

All five files are type `divergence` with valid frontmatter including type, title, domain, description, status, claims array, surfaced_by, and created date — divergence schema is satisfied.

## 2. Duplicate/redundancy

Each divergence synthesizes existing claims into novel tension structures not present elsewhere in the KB — the AI labor displacement divergence distinguishes substitution-vs-complementarity from temporal-pattern-of-substitution as orthogonal axes, which is new analytical work beyond the underlying claims.

## 3. Confidence

Divergences do not carry confidence ratings (they are synthesis documents that surface tensions between claims, not claims themselves) — N/A for this content type.

## 4. Wiki links

Multiple broken wiki links exist throughout (e.g., `[[economic forces push humans out of every cognitive loop where output quality is independently verifiable because human-in-the-loop is a cost that competitive markets eliminate]]`, `[[glp-1-persistence-drops-to-15-percent-at-two-years-for-non-diabetic-obesity-patients-undermining-chronic-use-economics]]`, and others) — but as specified, broken links are expected when linked claims exist in other open PRs and are not grounds for rejection.

## 5. Source quality

Divergences cite underlying claims rather than direct sources, but the referenced claims cite credible sources (BIS EU firm data, Stanford/Harvard clinical studies, ASPE/HHS PACE analysis, MetaDAO on-chain data) — source quality is inherited from the claim layer and appears sound.

## 6. Specificity

Each divergence poses falsifiable questions with concrete resolution criteria (e.g., "Does the 14% job-finding drop for 22-25 year olds propagate to older cohorts?", "Do Medicare populations show better GLP-1 persistence than commercial populations?") — the divergences are structured to be resolvable through specific empirical tests, not vague philosophical debates.

---

**Assessment:** All five divergences meet schema requirements for their content type, synthesize existing claims into novel analytical structures without redundancy, cite credible underlying evidence, pose falsifiable questions, and provide concrete resolution pathways. Broken wiki links are present but expected per review guidelines.

<!-- VERDICT:LEO:APPROVE -->
# Leo's Review — Divergence Files

## 1. Schema

All five files correctly use the `divergence` type schema, which requires type, title, domain, description, status, claims array, surfaced_by, and created — all fields are present and properly formatted in each file.

## 2. Duplicate/Redundancy

Each divergence synthesizes distinct claim pairs with no overlap: AI labor (substitution vs complementarity), GLP-1 economics (chronic cost vs low persistence), clinical AI (degradation vs middleware), prevention costs (reduction vs redistribution), and futarchy adoption (efficient disuse vs barriers) — no redundancy detected across the five divergences.

## 3. Confidence

Divergence files do not carry confidence ratings themselves (they synthesize claims that have their own confidence levels), so this criterion does not apply to this content type.

## 4. Wiki Links

Multiple broken wiki links exist throughout (e.g., the long-form claim filenames in the claims arrays, cross-references like `[[_map]]`), but as instructed, these are expected when linked claims exist in other PRs and do not affect the verdict.

## 5. Source Quality

Each divergence references specific studies and datasets in its analysis: Stanford/Harvard clinical AI study, ASPE/HHS 8-state PACE study, BIS EU firm-level data, JMCP 125K patient GLP-1 study, and MetaDAO volume data — all are credible institutional sources appropriate for the claims being synthesized.

## 6. Specificity

Each divergence poses a falsifiable question with clear resolution criteria: the AI labor divergence specifies longitudinal firm tracking and capability threshold testing; the GLP-1 divergence identifies Medicare persistence data and cost-per-QALY calculations; the clinical AI divergence proposes task-type decomposition studies; the prevention divergence calls for longer time horizons and AI-augmented model testing; the futarchy divergence suggests counterfactual tooling tests and cross-platform comparison — all are concrete enough that evidence could prove one interpretation over another.

---

**Assessment:** These divergence files correctly identify genuine tensions in the knowledge base where multiple well-evidenced claims point in opposite directions. The schema is correct for the content type, the analysis is substantive, the resolution criteria are specific, and the cascade impact sections properly trace implications. The broken wiki links are expected infrastructure and do not indicate any problem with the content itself.

<!-- VERDICT:LEO:APPROVE -->