Explainable Artificial Intelligence (XAI) Framework for Transparent Clinical Decision Support Systems

Sateesh Kumar Rongali

Abstract

Artificial Intelligence (AI), the branch of computer science concerned with building intelligent machines, is making its entry into Clinical Decision Support (CDS) systems used during patient diagnosis and treatment. The lack of transparency and interpretability of the underlying mathematical models threatens the adoption of AI-enabled decision support tools in medicine, because clinical safety hinges on user trust. Transparency and interpretability therefore constitute major tenets of Explainable AI (XAI), a field devoted to generating explanations in natural language, diagrams, or other forms suited to the anticipated end-user. Explanations delineate the relationship between the system's input and output and support users in their clinical reasoning. To promote XAI in the context of a transparent CDS framework, core principles are distilled from the XAI and CDS literature. Three high-level requirements emerge to guide specification engineering: the capability to present appropriate information to each actor at each decision milestone during the model lifecycle; the inclusion of mechanisms that enable users to ascertain that the explained outcome would align with the expected outcome in a similar situation; and the assurance of explanation usefulness in the context for which the AI approach was designed and deployed.
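To illustrate the kind of input-output explanation the abstract describes, the following is a minimal sketch, not taken from the article: a hypothetical logistic risk model over two invented clinical features, where the per-feature contributions to the logit serve as a simple feature-attribution explanation a clinician could inspect. All names, coefficients, and thresholds below are illustrative assumptions.

```python
import math

# Hypothetical logistic model for a binary clinical outcome.
# Feature names and weights are illustrative, not from the article.
FEATURES = ["serum_creatinine", "systolic_bp"]
WEIGHTS = {"serum_creatinine": 1.8, "systolic_bp": 0.02}
BIAS = -5.0

def predict_with_explanation(patient):
    """Return the model's risk estimate together with a per-feature
    contribution breakdown: a minimal input-to-output explanation."""
    # Each contribution is weight * value; their sum (plus bias) is the logit.
    contributions = {f: WEIGHTS[f] * patient[f] for f in FEATURES}
    logit = BIAS + sum(contributions.values())
    risk = 1.0 / (1.0 + math.exp(-logit))
    return risk, contributions

risk, why = predict_with_explanation(
    {"serum_creatinine": 2.1, "systolic_bp": 145}
)
# Ranking contributions by magnitude tells the user which input
# drove the output, supporting the clinical-reasoning check the
# framework's second requirement calls for.
ranked = sorted(why.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

In a deployed CDS, such raw contributions would be translated into the natural-language or diagrammatic form appropriate to each actor, per the first requirement above; this sketch only shows the underlying attribution step.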

Article Details

How to Cite
Sateesh Kumar Rongali. (2023). Explainable Artificial Intelligence (XAI) Framework for Transparent Clinical Decision Support Systems. International Journal of Medical Toxicology and Legal Medicine, 26(3 and 4), 22–31. Retrieved from http://ijmtlm.org/index.php/journal/article/view/1427