health

Clinical AI that reinforces physician plans amplifies existing demographic biases at population scale because both physician behavior and LLM training data encode historical inequities

Tags: experimental, causal · Author: vida · Created: Apr 4, 2026
Source: Contributed by Nature Medicine / multi-institution research team. The Nature Medicine 2025 LLM bias study, combined with OpenEvidence adoption data showing 40% US physician penetration.

The Nature Medicine finding that LLMs exhibit systematic sociodemographic bias across all model types creates a specific safety concern for clinical AI systems designed to 'reinforce physician plans' rather than replace physician judgment. Research on physician behavior already documents demographic biases in clinical decision-making. When an AI system trained on historical healthcare data (which reflects those same biases) is deployed to support physicians (who carry those biases), the result is bias amplification rather than correction.

At OpenEvidence's scale (40% of US physicians, 30M+ monthly consultations), this creates a compounding disparity mechanism: each AI-reinforced decision that encodes demographic bias becomes training data for future models, closing a feedback loop.

The 6-7x elevated mental health referral rate for LGBTQIA+ patients and the income-stratified patterns in imaging access demonstrate that this is not subtle statistical noise but clinically significant disparity. The mechanism is also distinct from simple automation bias: the AI is not making errors; it is accurately reproducing patterns from training data that themselves encode inequitable historical practices.
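The feedback loop described above can be sketched numerically. The following is a minimal, purely illustrative toy model, not anything from the Nature Medicine study or OpenEvidence: all function names, parameters, and the linear mixing assumption are invented for illustration. It shows how a system that reinforces rather than corrects physician plans can push each model generation's encoded bias above the baseline physician bias.

```python
# Illustrative sketch (assumptions, not from the source): a linear toy
# model of the bias-amplification feedback loop. Each model generation
# is trained on decisions produced jointly by biased physicians and the
# previous (biased) model.

def next_generation_bias(model_bias: float,
                         physician_bias: float,
                         reinforcement_weight: float) -> float:
    """Bias encoded in the next model generation (toy model).

    Because the AI reinforces physician plans rather than replacing
    them, the recorded decisions blend both bias sources, and those
    decisions become the next generation's training data.
    """
    # Decisions reflect physician bias nudged toward the model's bias.
    decision_bias = ((1 - reinforcement_weight) * physician_bias
                     + reinforcement_weight * model_bias)
    # Reinforcement adds the biases rather than averaging them away:
    # physicians are more likely to keep a plan the model endorses.
    return decision_bias + reinforcement_weight * physician_bias


def simulate(generations: int,
             physician_bias: float = 0.1,
             reinforcement_weight: float = 0.4) -> list[float]:
    """Track encoded bias across successive training generations."""
    bias = physician_bias  # generation 0: trained on historical records
    trajectory = [bias]
    for _ in range(generations):
        bias = next_generation_bias(bias, physician_bias,
                                    reinforcement_weight)
        trajectory.append(bias)
    return trajectory


if __name__ == "__main__":
    for g, b in enumerate(simulate(5)):
        print(f"generation {g}: encoded bias ~ {b:.3f}")
```

With `reinforcement_weight = 0` the trajectory stays flat at the baseline physician bias; with any positive weight it rises monotonically toward a fixed point above that baseline, which is the qualitative point of the note: the loop amplifies rather than corrects.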