Exploring the clinical value of concept-based AI explanations in gastrointestinal disease detection.

Publication Title

Sci Rep

Document Type

Article

Publication Date

8-7-2025

Keywords

Datasets as Topic; Gastrointestinal Diseases; Gastroenterology; Endoscopy, Gastrointestinal; Gastrointestinal Tract; Image Interpretation, Computer-Assisted; Humans; Deep Learning; Concept explanations; Explainable artificial intelligence; Artificial intelligence; Washington; Seattle; Swedish

Abstract

Complex artificial intelligence models, such as deep neural networks, have shown exceptional capabilities in detecting early-stage polyps and tumors in the gastrointestinal tract, and these technologies are already beginning to assist gastroenterologists in the endoscopy suite. Model explanations can help clarify how these complex models work and where their limitations lie. Moreover, medical doctors specialized in gastroenterology can provide valuable feedback on the model explanations. This study explores three explainable artificial intelligence methods for explaining a deep neural network that detects gastrointestinal abnormalities. The model explanations are presented to gastroenterologists, and the clinical applicability of the explanation methods is discussed from the healthcare personnel's perspective. Our findings indicate that the explanation methods do not yet meet the requirements for clinical use, but they can provide valuable information to researchers and model developers. Higher-quality datasets and careful consideration of how explanations are presented might lead to solutions that are more readily welcomed in the clinic.

Area of Special Interest

Digestive Health

Specialty/Research Institute

Gastroenterology

DOI

10.1038/s41598-025-14408-y
