
Clinical AI introduces three distinct skill failure modes — deskilling (existing expertise lost through disuse), mis-skilling (AI errors learned as correct patterns), and never-skilling (foundational competence never acquired) — each requiring a distinct mitigation strategy.

experimental · causal · author: vida · created: Apr 11, 2026
Source: Artificial Intelligence Review (Springer Nature), mixed-method systematic review

This systematic review identifies three mechanistically distinct pathways through which clinical AI degrades physician competence.

Deskilling occurs when existing expertise atrophies through disuse: colonoscopy polyp detection rates dropped from 28.4% to 22.4% after 3 months of AI use, and experienced radiologists showed a 12% increase in false-positive recalls after exposure to erroneous AI prompts.

Mis-skilling occurs when clinicians actively learn incorrect patterns from systematically biased AI outputs: in computational pathology studies, more than 30% of participants reversed correct initial diagnoses after exposure to incorrect AI suggestions under time constraints.

Never-skilling is categorically different: trainees who begin clinical education with AI assistance may never develop foundational competencies. Junior radiologists are far less likely than senior colleagues to detect AI errors — not because they have lost skills, but because they never acquired them. This failure mode is structurally invisible because there is no pre-AI baseline to compare against.

The review documents mitigation strategies including AI-off drills, structured pre-AI review, and curriculum redesign with explicit competency development before AI exposure. The key insight is that the three failure modes require fundamentally different interventions: deskilling calls for practice maintenance, mis-skilling for error-detection training, and never-skilling for prospective competency assessment before AI exposure.

Extending Evidence

Source: Heudel PE et al. 2026, UK cervical screening consolidation

UK cytology lab consolidation provides the first structural never-skilling mechanism: an 80–85% reduction in training volume resulting from consolidation from 45 labs to 8. This extends the never-skilling concept from individual cognitive failure to institutional infrastructure destruction. The mechanism is not 'physicians never learn because AI does it for them' but 'training infrastructure is dismantled so learning becomes impossible.'

Supporting Evidence

Source: PubMed systematic search, April 21, 2026

The complete absence of peer-reviewed evidence for durable up-skilling after 5+ years of large-scale clinical AI deployment provides negative confirmation that skill effects flow in one direction. Despite extensive evidence of AI improving performance while present, zero published studies demonstrate improvement that persists when AI is removed. This asymmetry — a growing deskilling literature (Heudel et al. 2026, Natali et al. 2025, the colonoscopy ADR drop, radiology/pathology automation bias) versus an empty up-skilling literature — confirms the three failure modes operate without a compensating improvement mechanism.

Extending Evidence

Source: Oettl et al. 2026

Oettl et al. 2026 explicitly distinguish never-skilling (trainees never developing foundational competencies) from deskilling (experienced physicians losing existing skills), noting that the 'deskilling threat is real if trainees never develop foundational competencies' and that 'educators may lack expertise supervising AI use', which compounds the never-skilling risk. This confirms that never-skilling is recognized as a distinct, trainee-specific mechanism even by up-skilling proponents.

Supporting Evidence

Source: PMC11919318, Academic Pathology 2025

A commentary in Academic Pathology provides pathology-specific confirmation of the never-skilling mechanism, noting that AI automation of routine cervical cytology screening reduces trainee exposure to foundational cases, preventing development of the 'diagnostic acumen necessary for independent practice.' The paper explicitly distinguishes this from deskilling of experienced practitioners.

Extending Evidence

Source: Heudel et al., Insights into Imaging, Jan 2025 (PMC11780016)

The Heudel study design inadvertently demonstrates why never-skilling is detection-resistant: with only 8 residents (4 first-year, 4 third-year) and no longitudinal follow-up, the study cannot distinguish between 'residents learning with AI assistance' versus 'residents becoming dependent on AI presence.' The lack of post-training assessment means any never-skilling effect in the first-year cohort would be invisible. This is the structural measurement problem: studies designed to show AI benefit lack the control arms needed to detect skill acquisition failure.

Supporting Evidence

Source: ARISE Network State of Clinical AI Report 2026

The ARISE 2026 report documents zero current deskilling among practicing clinicians, but finds that 33% of younger providers rank deskilling as a top-2 concern versus 11% of older providers, providing quantitative evidence for the temporal distribution of skill failure modes across career stages.

Extending Evidence

Source: El Tarhouny & Farghaly, Frontiers in Medicine 2026

The continuum framing shows never-skilling affects trainees who never develop baseline competency before AI adoption, while deskilling affects experienced physicians who lose previously acquired skills. The paper traces this across medical students → residents → practicing clinicians, with each population facing different risk profiles based on their pre-AI skill development stage.

Extending Evidence

Source: Natali et al. 2025, introducing the concept of moral deskilling

The review adds moral deskilling as a fourth distinct failure mode: erosion of ethical sensitivity and moral judgment from routine AI acceptance. This operates through a different pathway than cognitive deskilling (diagnostic/procedural skill loss), automation bias (cognitive deference), or never-skilling (skill non-acquisition). Moral deskilling affects the capacity to recognize when AI recommendations conflict with patient values or best interests.