Open Access

Feasible Implementation of Explainable AI Empowered Secured Edge Based Health Care Systems

25 Feb 2025

The infusion of Explainable Artificial Intelligence (XAI) into secured edge-based healthcare systems addresses the critical challenges of ensuring trust, transparency, and security in sensitive medical applications. Existing healthcare systems that rely on traditional AI methods often suffer from a lack of interpretability, data-privacy risks, and inefficiencies in real-time decision-making; these limitations hinder user trust and the adoption of AI solutions in clinical and edge environments. To overcome these challenges, we propose an XAI-empowered, secured, edge-based healthcare framework that uses deep learning (DL) models, specifically Long Short-Term Memory (LSTM) networks, for accurate and interpretable diagnosis. The system uses the UNSW dataset to train and validate the model for healthcare anomaly detection and prediction tasks. By embedding XAI methodologies, the proposed framework makes decision-making processes transparent and understandable to healthcare professionals, fostering trust and enabling better clinical decision-making. Our implementation addresses the critical need for secure, real-time healthcare analytics at the edge while maintaining high accuracy and privacy. In rigorous experiments, the proposed system achieves 99% accuracy, demonstrating its potential to advance edge-based healthcare solutions. This research highlights the synergy among XAI, edge computing, and DL techniques in building secure, interpretable healthcare systems for real-world applications.
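
The abstract names LSTM networks for anomaly detection on the UNSW dataset and an XAI layer for interpretability, but it specifies no libraries or hyperparameters. The sketch below is one plausible realization, assuming TensorFlow/Keras for the LSTM classifier and SHAP gradient attributions for the explanation step; the feature count, window size, layer widths, and synthetic data are illustrative placeholders, not details taken from the paper.

```python
# Minimal sketch of the described pipeline, assuming TensorFlow/Keras + SHAP.
# Feature count, window size, and data are hypothetical stand-ins.
import numpy as np
import tensorflow as tf
import shap

N_FEATURES = 42   # assumption: tabular flow/record features (UNSW-style)
WINDOW = 10       # assumption: timesteps per LSTM input sequence

def build_lstm(n_features: int = N_FEATURES, window: int = WINDOW) -> tf.keras.Model:
    """Binary anomaly detector: stacked LSTM over sliding windows of records."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, n_features)),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # anomaly probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Toy data standing in for windowed, normalized dataset records.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, WINDOW, N_FEATURES)).astype("float32")
y = rng.integers(0, 2, size=(256,)).astype("float32")

model = build_lstm()
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# XAI step: attribute each prediction to its input features, so a clinician
# or analyst can inspect which measurements drove an "anomaly" decision.
explainer = shap.GradientExplainer(model, X[:64])
shap_values = explainer.shap_values(X[:8])
print(np.asarray(shap_values).shape)  # per-timestep, per-feature attributions
```

On an edge deployment, the trained model would typically be converted to a compact runtime format and the attribution step run on demand rather than per inference; the abstract does not state which approach the authors use, so the code above leaves deployment out of scope.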