About this article


This research focuses on the development of an explainable artificial intelligence (XAI) system for the analysis of medical data. Medical imaging and related datasets are inherently complex owing to their high dimensionality and the intricate biological patterns they represent. Decoding and interpreting these complexities requires sophisticated computational models, which has led to the widespread adoption of deep neural networks. However, while these models achieve remarkable accuracy, their "black-box" nature raises legitimate concerns about their interpretability and reliability in clinical contexts.

To address this challenge, three approaches can be considered: traditional statistical methods, a single complex neural network, or an ensemble of simpler neural networks. Traditional statistical methods, though transparent, often lack the sensitivity required to capture the intricate patterns within medical images. A single complex neural network, while powerful, can be too general, making specific interpretations difficult. Our proposed strategy therefore employs a hybrid system that combines multiple neural networks with distinct architectures, each tailored to a specific facet of the medical data interpretation problem.
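As an illustration only, the following is a minimal PyTorch sketch of such a hybrid system: two branches with distinct architectures (a small CNN and an MLP) whose features are fused for a final prediction. The branch designs, dimensions, and class count are hypothetical placeholders; the abstract does not specify the actual networks used in this work.

```python
# Minimal sketch of a hybrid multi-architecture ensemble (illustrative only).
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """CNN branch: sensitive to local texture patterns (hypothetical design)."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(32, out_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))

class MLPBranch(nn.Module):
    """MLP branch over flattened intensities: global statistics (hypothetical design)."""
    def __init__(self, in_pixels=64 * 64, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_pixels, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class HybridEnsemble(nn.Module):
    """Fuses features from architecturally distinct branches, then classifies."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.branches = nn.ModuleList([ConvBranch(), MLPBranch()])
        self.head = nn.Linear(64 * 2, num_classes)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.head(feats)

model = HybridEnsemble()
logits = model(torch.randn(4, 1, 64, 64))  # batch of 4 single-channel images
print(logits.shape)  # torch.Size([4, 2])
```

The design choice sketched here is mid-level feature fusion; an ensemble could equally average per-branch predictions, and the abstract does not say which variant the authors use.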

The key components of the proposed technology include a module for detecting anomalies within medical images, a module for categorizing detected anomalies into specific medical conditions, and a module for generating user-friendly, clinically relevant interpretations.
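To make the three-module structure concrete, here is a minimal sketch of how detection, categorization, and interpretation could be chained. Every name here (Anomaly, detect_anomalies, classify_anomaly, explain) and every returned value is a hypothetical stub; the abstract does not describe the actual interfaces or models.

```python
# Minimal sketch of the three-module pipeline (all stubs are hypothetical).
from dataclasses import dataclass
from typing import List

@dataclass
class Anomaly:
    region: tuple   # (x, y, w, h) bounding box within the image
    score: float    # detection confidence

def detect_anomalies(image) -> List[Anomaly]:
    """Module 1: flag suspicious regions (stand-in for a detection network)."""
    return [Anomaly(region=(10, 20, 30, 30), score=0.91)]

def classify_anomaly(image, anomaly: Anomaly) -> str:
    """Module 2: map a detected anomaly to a medical condition (stand-in)."""
    return "suspected_lesion"

def explain(anomaly: Anomaly, condition: str) -> str:
    """Module 3: produce a clinician-readable interpretation (stand-in)."""
    x, y, w, h = anomaly.region
    return (f"Region at ({x},{y}) of size {w}x{h} flagged with "
            f"confidence {anomaly.score:.2f}; classified as {condition}.")

def run_pipeline(image) -> List[str]:
    """Chain the modules: detect, then classify, then explain each finding."""
    reports = []
    for anomaly in detect_anomalies(image):
        condition = classify_anomaly(image, anomaly)
        reports.append(explain(anomaly, condition))
    return reports

print(run_pipeline(image=None))  # the stubs ignore the image content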

eISSN: 2449-6499
Language: English
Frequency: 4 times per year
Journal subjects: Computer Sciences, Databases and Data Mining, Artificial Intelligence