Explainability and artificial intelligence in medicine

In recent years, improved artificial intelligence (AI) algorithms and access to training data have led to the possibility of AI augmenting or replacing some of the current functions of physicians. However, interest from various stakeholders in the use of AI in medicine has not translated into widespread adoption. As many experts have noted, one of the key reasons for this limited uptake is the lack of transparency associated with certain AI algorithms, especially black-box algorithms. Clinical medicine, particularly evidence-based medical practice, relies on transparency in decision making. If the AI is not medically explainable and the physician cannot reasonably explain the decision-making process, the patient's trust in the physician will erode. Explainable AI has emerged to address this transparency problem with certain AI models.
