
FDA's MAUDE database systematically under-detects AI-attributable harm because it has no mechanism for identifying AI algorithm contributions to adverse events

tags: experimental, structural · author: vida · created: Apr 2, 2026
Source: Babic et al., npj Digital Medicine 2025; Handley et al. 2024 (companion study)

MAUDE recorded only 943 adverse events across 823 FDA-cleared AI/ML devices from 2010 to 2023, an average of roughly 1.15 events per device over 13 years. For comparison, FDA reviewed over 1.7 million MDRs for all devices in 2023 alone. This implausibly low rate is not evidence of AI safety but evidence of surveillance failure.

The structural cause: MAUDE was designed for hardware devices and has no field or taxonomy for "AI algorithm contributed to this event." Without AI-specific reporting mechanisms, three failures cascade (see the sketches below):

1. There is no way to distinguish device hardware failures from AI algorithm failures in existing reports.
2. Manufacturers are not required to identify AI contributions to reported events.
3. Causal attribution becomes impossible.

The companion Handley et al. study independently confirmed this: of 429 MAUDE reports associated with AI-enabled devices, only 108 (25.2%) were potentially AI/ML related, and 148 (34.5%) contained insufficient information to determine AI contribution. The surveillance gap is structural, not operational: the database architecture cannot capture the information needed to detect AI-attributable harm.
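As a sanity check on the rates cited above, the arithmetic is reproducible in a few lines (the counts are from Babic et al. and Handley et al.; the per-device average and percentages follow directly from them):

```python
# Back-of-envelope check of the surveillance rates cited above.
events = 943        # adverse events in MAUDE, 2010-2023 (Babic et al.)
devices = 823       # FDA-cleared AI/ML devices in the cohort
print(f"Events per device over 13 years: {events / devices:.2f}")  # ~1.15

# Handley et al. screening of AI-associated MAUDE reports
reports = 429
ai_related = 108
insufficient = 148
print(f"Potentially AI/ML related: {ai_related / reports:.1%}")    # 25.2%
print(f"Insufficient information: {insufficient / reports:.1%}")   # 34.5%
```

Set against the 1.7 million MDRs FDA reviewed in 2023 alone, roughly one report per AI device over 13 years is the implausibility the note turns on.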
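Because the schema has no structured AI-attribution field, the only way to surface candidate AI-related reports is free-text screening of report narratives, which is essentially the approach behind the Handley et al. numbers. Below is a minimal sketch against the public openFDA device adverse-event endpoint; the endpoint is real, but the keyword list is an illustrative assumption and the `mdr_text.text` field path should be verified against the current openFDA schema:

```python
import requests

# openFDA device adverse-event endpoint (public; light use needs no API key).
OPENFDA_DEVICE_EVENT = "https://api.fda.gov/device/event.json"

# Illustrative keyword list only -- a real screen would need a validated set
# of terms, and keyword hits still require manual review for AI relevance.
AI_KEYWORDS = ["algorithm", "artificial intelligence", "machine learning"]

def fetch_candidate_reports(keyword: str, limit: int = 10) -> list[dict]:
    """Pull reports whose narrative text mentions a keyword.

    With no structured 'AI contributed' field to query, free-text search
    over report narratives is the only available detection route.
    """
    params = {"search": f'mdr_text.text:"{keyword}"', "limit": limit}
    resp = requests.get(OPENFDA_DEVICE_EVENT, params=params, timeout=30)
    if resp.status_code == 404:   # openFDA returns 404 when nothing matches
        return []
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for kw in AI_KEYWORDS:
        hits = fetch_candidate_reports(kw)
        print(f"{kw!r}: {len(hits)} sample reports (manual review still needed)")
```

Note what this approach cannot do: a narrative that never names the algorithm is invisible to it, which is exactly the 34.5% "insufficient information" problem.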
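The structural fix the note implies is an explicit attribution field in the report schema itself. A purely hypothetical sketch follows; neither the field nor the taxonomy exists in MAUDE today, and the category names are invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class AIContribution(Enum):
    """Hypothetical taxonomy -- no such field exists in MAUDE."""
    NOT_APPLICABLE = "device has no AI/ML component"
    NO_CONTRIBUTION = "AI/ML component present but not implicated"
    SUSPECTED = "AI/ML output suspected to have contributed"
    CONFIRMED = "AI/ML output confirmed to have contributed"
    UNDETERMINED = "insufficient information to assess"

@dataclass
class AdverseEventReport:
    """Minimal report record with a structured AI-attribution slot added."""
    report_id: str
    device_name: str
    narrative: str
    ai_contribution: AIContribution = AIContribution.UNDETERMINED

# With such a field, the three cascading failures become queryable:
# hardware vs. algorithm failures separate on ai_contribution, manufacturers
# have a slot they can be required to fill, and attribution gaps (e.g., the
# share of UNDETERMINED reports) become directly measurable.
example = AdverseEventReport(
    report_id="MW-0000000",            # placeholder, not a real report key
    device_name="example CADe device",
    narrative="Algorithm flagged no finding; lesion later confirmed.",
    ai_contribution=AIContribution.SUSPECTED,
)
```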