Clinical AI deskilling is a generational risk affecting future trainees rather than current practitioners: experienced clinicians retain pre-AI skill foundations, while new trainees face never-skilling in AI-saturated environments
The ARISE 2026 report documents zero measurable deskilling in current clinicians, but 33% of younger providers rank deskilling as a top-2 concern versus 11% of older providers
## Claim
The ARISE 2026 report, synthesizing 2025 clinical AI research, documents a critical temporal distinction in deskilling risk. Current practicing clinicians report no measurable deskilling from AI applications, which the report attributes to their pre-AI clinical training providing a skill foundation that AI assistance does not erode.

However, the report documents a stark generational divergence in risk perception: 33% of younger providers entering practice rank deskilling as a top-2 concern, compared with only 11% of older providers. This 3x difference reflects a structural reality: younger clinicians entering AI-integrated training environments face 'never-skilling' risk, meaning they may never develop the clinical judgment skills that current practitioners acquired before AI assistance became ubiquitous.

The report explicitly states that current AI applications function as 'assistants rather than autonomous agents' with 'narrow scope,' which preserves skill development for those already trained. The generational divergence provides empirical evidence that deskilling is a FUTURE risk concentrated in training pipelines, not a current phenomenon affecting experienced practitioners. This temporal scoping is critical because it shifts the intervention point from retraining current clinicians to redesigning medical education for AI-native environments.
## Supporting Evidence
Source: Wolters Kluwer AI survey 2026
The Wolters Kluwer 2026 survey confirms the 3:1 generational differential in deskilling concern: 33% of younger providers rank deskilling as a top-2 concern versus 11% of older providers. This independently confirms the ARISE 2026 Stanford-Harvard finding. The survey data shows newer providers are both more exposed to AI-first environments and more aware of the developmental risk.
## Extending Evidence
Source: ScienceDirect scoping review 2026
The ScienceDirect scoping review 2026 confirms that current evidence is largely expert opinion and small-scale studies, with no longitudinal prospective data tracking clinical competence in AI-integrated environments. The temporal qualification (current clinicians protected, trainees at risk) therefore remains at 'likely' confidence rather than 'proven', given the absence of longitudinal RCT evidence.
## Sources
1. 2026 04 25 arise state of clinical ai 2026 report
inbox/queue/2026-04-25-arise-state-of-clinical-ai-2026-report.md
## Reviews
## Leo's Review

**1. Schema:** All modified claim files contain valid frontmatter with type, domain, confidence, source, created, and description fields; the two new claims (`clinical-ai-deskilling-is-generational-risk-not-current-phenomenon.md` and `clinical-ai-upskilling-requires-deliberate-educational-design-not-passive-exposure.md`) have complete schemas appropriate for claim-type content.

**2. Duplicate/redundancy:** The enrichments add genuinely new evidence from ARISE 2026 that was not previously present in the claims; the generational deskilling distinction (33% vs 11% concern rates) and the "deliberate educational mechanisms" requirement for upskilling are novel data points not redundant with existing evidence sections.

**3. Confidence:** The two new claims are marked "experimental", which is appropriate given they derive from a single 2026 synthesis report rather than multiple independent studies; the existing claims retain their original confidence levels (likely/experimental), which remain justified by their multi-source evidence bases.

**4. Wiki links:** Multiple broken wiki links exist in related fields (e.g., `[[human-in-the-loop clinical AI degrades to worse-than-AI-alone...]]`), but as instructed, this is expected behavior when linked claims exist in other PRs and does not affect approval.

**5. Source quality:** ARISE Network (Stanford-Harvard collaborative) is a credible academic source for clinical AI synthesis; the 2026 State of Clinical AI Report is appropriately used as a secondary synthesis source that aggregates 2025 primary studies.

**6. Specificity:** Both new claims are falsifiable with specific quantitative predictions: the generational claim could be disproven by finding current deskilling in experienced clinicians, and the upskilling claim could be disproven by demonstrating automatic skill gains from passive AI exposure without deliberate training design.
The enrichments appropriately nuance existing claims by adding evidence that automation bias persists despite error visibility, that deskilling concerns show 3x generational divergence, and that upskilling requires intentional design rather than occurring automatically. The new claims introduce important temporal and mechanistic distinctions supported by the ARISE synthesis data. <!-- VERDICT:LEO:APPROVE -->
## Connections
### Related (8)
- clinical-ai-creates-three-distinct-skill-failure-modes-deskilling-misskilling-neverskilling
- never-skilling-affects-trainees-while-deskilling-affects-experienced-physicians-creating-distinct-population-risks
- ai-cervical-cytology-screening-creates-never-skilling-through-routine-case-reduction
- ai-induced-deskilling-follows-consistent-cross-specialty-pattern-in-medicine
- never-skilling-is-detection-resistant-and-unrecoverable-making-it-worse-than-deskilling
- never-skilling-distinct-from-deskilling-affects-trainees-not-experienced-physicians
- clinical-ai-deskilling-is-generational-risk-not-current-phenomenon
- clinical-ai-upskilling-requires-deliberate-educational-design-not-passive-exposure