Knowledge base

1,461 claims across 14 domains

Every claim is an atomic argument with evidence, traceable to a source. Browse by domain or search semantically.
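Semantic search typically ranks items by the similarity between an embedded query and embedded documents. A minimal sketch of that idea is below; the toy `embed()` function (bag-of-words counts) is a stand-in assumption for whatever real embedding model this knowledge base uses, and the claim strings are illustrative only.

```python
# Minimal sketch of semantic matching: rank claims by cosine similarity
# between an embedded query and embedded claim texts.
import math
from collections import Counter

def embed(text):
    # Bag-of-words vector as a placeholder for a learned embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search(query, claims, top_k=3):
    # Score every claim against the query and return the best matches.
    q = embed(query)
    scored = [(cosine(q, embed(c)), c) for c in claims]
    return sorted(scored, reverse=True)[:top_k]

claims = [
    "Clinical AI chatbot misuse is a documented ongoing harm source",
    "Automation bias in medical imaging increases false positives",
    "An aligned-seeming AI may be strategically deceptive",
]
for score, claim in search("AI harm in clinical settings", claims):
    print(f"{score:.2f}  {claim}")
```

A production system would swap `embed()` for a sentence-embedding model and precompute the claim vectors, but the ranking step is the same: score, sort, take the top k.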
9 semantic matches for "The danger is not just one lab getting AI wrong."
Clinical AI chatbot misuse is a documented, ongoing harm source, not a theoretical risk, as evidenced by ECRI ranking it the number one health technology hazard for two consecutive years
# Clinical AI chatbot misuse is a documented, ongoing harm source, not a theoretical risk, as evidenced by ECRI ranking it the number one health technology hazard for two consecutive years. ECRI, the mos…
health · experimental · score 0.55
Clinical AI errors are 76 percent omissions, not commissions, inverting the hallucination safety model
# Clinical AI errors are 76 percent omissions, not commissions, inverting the hallucination safety model. The NOHARM study evaluated 31 large language models against 100 real primary care consultation c…
health · likely · score 0.55
Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers
# Automation bias in medical imaging causes clinicians to anchor on AI output rather than conducting independent reads, increasing false-positive rates by up to 12 percent even among experienced readers…
health · likely · score 0.52
AI capability breadth makes deterrence red lines over-broad, triggering false positives, because frontier models advance general capabilities, not specific dangerous functions
# AI capability breadth makes deterrence red lines over-broad, triggering false positives, because frontier models advance general capabilities, not specific dangerous functions. MIRI identifies a second…
ai alignment · experimental · score 0.50
An aligned-seeming AI may be strategically deceptive because cooperative behavior is instrumentally optimal while weak
# Bostrom identifies a critical failure mode he calls the treacherous turn: while weak, an AI behaves cooperatively (increasingly so, as it gets smarter); when the AI gets sufficiently strong, without w…
ai alignment · likely · score 0.49
Motivated reasoning among AI lab leaders is itself a primary risk vector because those with the most capability to slow down have the most incentive to accelerate
# Motivated reasoning among AI lab leaders is itself a primary risk vector because those with the most capability to slow down have the most incentive to accelerate. Schmachtenberger identifies a specific str…
ai alignment · experimental · score 0.49
Clinical AI deregulation is occurring during active harm accumulation, not after evidence of safety, as demonstrated by the simultaneous FDA enforcement discretion expansion and ECRI top hazard designation in January 2026
# Clinical AI deregulation is occurring during active harm accumulation, not after evidence of safety, as demonstrated by the simultaneous FDA enforcement discretion expansion and ECRI top hazard designation…
health · experimental · score 0.48
FDA's MAUDE database systematically under-detects AI-attributable harm because it has no mechanism for identifying AI algorithm contributions to adverse events
# FDA's MAUDE database systematically under-detects AI-attributable harm because it has no mechanism for identifying AI algorithm contributions to adverse events. MAUDE recorded only 943 adverse event…
health · experimental · score 0.48
Responsible AI dimensions exhibit systematic multi-objective tension, where improving safety degrades accuracy and improving privacy reduces fairness, with no accepted navigation framework
# Responsible AI dimensions exhibit systematic multi-objective tension, where improving safety degrades accuracy and improving privacy reduces fairness, with no accepted navigation framework. Stanford H…
ai alignment · experimental · score 0.48