Impactful Workplace and Learning Experiences of AI

Assessing AI-enabled processes for meaningful outcomes & professional growth.

Healthy Scepticism

Healthy scepticism of AI refers to the practice of critically engaging with AI outputs rather than accepting them at face value, an approach that is especially vital in healthcare. One survey found that 92% of general users do not verify AI-generated answers for accuracy, underscoring the risk of over-reliance without validation (Stillman, 2023).

In clinical contexts, however, this scepticism can be protective: one study found that clinicians often “negotiate” with AI recommendations, weighing them against professional judgment and established evidence, which helps mitigate errors (Sivaraman et al., 2023).

The concept of “healthy distrust” has even been proposed as a positive stance, on the grounds that caution and verification are justified when AI is deployed in high-stakes settings (Paaßen et al., 2025).

Moreover, evidence suggests that low-veracity AI explanations may be more harmful than no explanations at all, as they degrade users’ performance and decision-making (Nourani et al., 2020). By cultivating habits of verification, embedding explainability, and supporting professional training, healthcare systems can foster a culture in which scepticism becomes a constructive tool that enhances both trust and safety in AI adoption.

Sources:

  • Stillman, J. (2023). Are You Too Trusting of AI Answers? 92 Percent of People Don’t Check It for Accuracy. Inc.com.
  • Sivaraman, V., Bukowski, L. A., Levin, J., Kahn, J. M., & Perer, A. (2023). Ignore, trust, or negotiate: Understanding clinician acceptance of AI-based treatment recommendations in health care. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–18).
  • Paaßen, B., Alpsancar, S., Matzner, T., & Scharlau, I. (2025). Healthy Distrust in AI systems. arXiv preprint arXiv:2505.09747.
  • Nourani, M., Roy, C., Rahman, T., Ragan, E. D., Ruozzi, N., & Gogate, V. (2020). Don’t explain without verifying veracity: An evaluation of explainable AI with video activity recognition. arXiv preprint arXiv:2005.02335.