
AI-induced deskilling follows a consistent cross-specialty pattern where AI assistance improves performance while present but creates cognitive dependency that degrades performance when AI is unavailable

likely causal · author: vida · created Apr 13, 2026
Source: Natali et al., Artificial Intelligence Review 2025, mixed-method systematic review

Natali et al.'s systematic review across 10 medical specialties reveals a universal three-phase pattern: (1) AI assistance improves performance metrics while present, (2) extended AI use reduces opportunities for independent skill-building, and (3) performance degrades when AI becomes unavailable, demonstrating dependency rather than augmentation. Quantitative evidence includes: colonoscopy ADR dropping from 28.4% to 22.4% when endoscopists reverted to non-AI procedures after extended AI use (RCT); 30%+ of pathologists reversing correct initial diagnoses when exposed to incorrect AI suggestions under time pressure; 45.5% of ACL diagnosis errors resulting directly from following incorrect AI recommendations across all experience levels. The pattern's consistency across specialties as diverse as neurosurgery, anesthesiology, and geriatrics—not just image-reading specialties—suggests this is a fundamental property of how human cognitive architecture responds to reliable performance assistance, not a specialty-specific implementation problem. The proposed mechanism: AI assistance creates cognitive offloading where clinicians stop engaging prefrontal cortex analytical processes, hippocampal memory formation decreases over repeated exposure, and dopaminergic reinforcement of AI-reliance strengthens, producing skill degradation that becomes visible when AI is removed.
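The colonoscopy figures above are usually quoted as a 6-point drop, which understates the effect unless percentage points are distinguished from relative change. A minimal sketch, using only the ADR values reported in the review:

```python
# Absolute (percentage-point) vs relative decline for the reported
# colonoscopy ADR figures: 28.4% during AI-assisted practice vs 22.4%
# after reverting to non-AI procedures.

adr_with_ai_history = 28.4  # ADR (%) before reverting to non-AI procedures
adr_without_ai = 22.4       # ADR (%) after reverting

# Absolute decline in percentage points
absolute_decline_pp = adr_with_ai_history - adr_without_ai  # 6.0 pp

# Relative decline: the drop as a fraction of the starting ADR
relative_decline_pct = absolute_decline_pp / adr_with_ai_history * 100  # ~21% relative

print(f"Absolute decline: {absolute_decline_pp:.1f} pp")
print(f"Relative decline: {relative_decline_pct:.1f} %")
```

The distinction matters when comparing studies: a 6.0 pp drop corresponds to roughly a one-fifth relative reduction in adenoma detection, which is the framing under which the RCT result reads as clinically significant.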

Supporting Evidence

Source: Heudel PE et al. 2026, ESMO scoping review

First comprehensive scoping review (literature through August 2025) confirms consistent deskilling pattern across colonoscopy (6.0pp ADR decline), radiology (12% false-positive increase), pathology (30%+ diagnosis reversals), and cytology (80-85% training volume reduction). Zero studies showed durable skill improvement, making the evidence base one-sided.

Challenging Evidence

Source: Oettl et al., Journal of Experimental Orthopaedics 2026

Oettl et al. present the strongest available counter-argument to medical AI deskilling, arguing that AI will 'necessitate an evolution of the physician's role' toward augmentation rather than replacement. They propose three upskilling mechanisms: micro-learning at point of care, liberation from administrative burden, and performance floor standardization. However, the paper is primarily theoretical—all empirical evidence cited measures concurrent AI-assisted performance rather than post-training skill retention.

Challenging Evidence

Source: Heudel et al., Insights into Imaging, 2025 (PMC11780016)

Radiology residents using AI assistance showed resilience to large AI errors (>3 points), maintaining average errors of roughly 2.75-2.88 points even when the AI was significantly wrong. This suggests physicians can detect and reject major AI errors during active use, which challenges the automation-bias mechanism, provided physicians maintain critical evaluation capacity. However, this finding is limited to n=8 residents in a controlled setting and does not test whether the resilience persists under time pressure or after prolonged AI exposure.

Challenging Evidence

Source: Heudel et al., Insights into Imaging, Jan 2025 (PMC11780016)

The Heudel radiology study is frequently cited (including by Oettl 2026) as evidence for AI-induced upskilling, creating apparent contradiction with deskilling evidence. However, close reading reveals it only shows performance improvement with AI present, not durable skill acquisition. The study's own title poses 'Upskilling or Deskilling?' as an open question, and the data cannot answer it without a post-training, no-AI assessment arm. This represents the core methodological limitation in the upskilling literature: conflating AI-assistance effects with learning effects.

Extending Evidence

Source: El Tarhouny & Farghaly, Frontiers in Medicine 2026

Deskilling affects the full medical education continuum with distinct risk profiles: medical students face never-skilling (never developing independent reasoning before AI becomes standard), residents face partial-skilling (developing incomplete skills then transitioning to AI environments), and practicing clinicians face sustained deskilling from years of AI reliance. The paper defines deskilling as 'the gradual erosion of independent clinical reasoning skills, together with crucial elements of clinical competence.'

Supporting Evidence

Source: Natali et al. 2025, Springer mixed-method review

This mixed-method review synthesizes evidence across multiple clinical specialties confirming the cross-specialty deskilling pattern. The review identifies consistent mechanisms: reduced practice opportunities, overreliance on automated systems, and skill atrophy affecting physical examination, differential diagnosis, clinical judgment, physician-patient communication, and ethical reasoning across diverse clinical contexts.