International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA

Volume 187 - Issue 52
Published: November 2025
Authors: Rishika Singh, Swati Joshi
DOI: 10.5120/ijca2025925888
Rishika Singh, Swati Joshi. Explainable Federated Learning Taxonomy, Evaluation Frameworks, and Emerging Challenges. International Journal of Computer Applications. 187, 52 (November 2025), 52-58. DOI=10.5120/ijca2025925888
@article{ 10.5120/ijca2025925888,
author = { Rishika Singh and Swati Joshi },
title = { Explainable Federated Learning Taxonomy, Evaluation Frameworks, and Emerging Challenges },
journal = { International Journal of Computer Applications },
year = { 2025 },
volume = { 187 },
number = { 52 },
pages = { 52-58 },
doi = { 10.5120/ijca2025925888 },
publisher = { Foundation of Computer Science (FCS), NY, USA }
}
%0 Journal Article
%D 2025
%A Rishika Singh
%A Swati Joshi
%T Explainable Federated Learning Taxonomy, Evaluation Frameworks, and Emerging Challenges
%J International Journal of Computer Applications
%V 187
%N 52
%P 52-58
%R 10.5120/ijca2025925888
%I Foundation of Computer Science (FCS), NY, USA
The rapid integration of AI into sensitive domains such as cybersecurity, healthcare, and finance demands solutions that guarantee both data privacy and model transparency. Federated Learning (FL) is a promising paradigm that enables collaborative model training across decentralized datasets while preserving privacy, since raw data is never shared. Simultaneously, Explainable AI (XAI) makes otherwise opaque models interpretable, fostering stakeholder trust and supporting regulatory compliance. Using techniques such as SHAP, LIME, Grad-CAM, fuzzy logic, and rule-based systems, recent research has investigated the intersection of FL and XAI in tasks such as intrusion detection, fraud detection, and medical diagnosis. Despite the strong performance of these efforts, open problems remain around scalability, non-IID data, privacy-interpretability trade-offs, standardized evaluation metrics, and resilience to adversarial manipulation. This review compiles the current state of research, identifies important gaps, highlights methodological trends, and suggests future directions. Integrating FL and XAI to address these issues could yield reliable, private, and interpretable AI systems for high-stakes settings where both security and explainability are crucial.
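
The abstract's core FL claim, collaborative training without sharing raw data, can be made concrete with a short sketch. The following Python/NumPy example (all names, data, and hyperparameters are illustrative assumptions, not code from the surveyed papers) runs federated averaging (FedAvg) over three clients whose feature distributions are deliberately shifted to mimic non-IID conditions: each client trains a logistic-regression model locally, only its weights ever reach the server, and the averaged coefficients remain directly readable as feature importances, a minimal form of the interpretability XAI aims for.

import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # One client's local training: plain gradient descent on the
    # logistic loss. Only the updated weights leave the client;
    # the raw data (X, y) never does.
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))    # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)  # gradient step
    return w

def fed_avg(client_data, dim, rounds=20):
    # Server loop: broadcast the global weights, collect each client's
    # local update, and average the updates weighted by sample count.
    w_global = np.zeros(dim)
    for _ in range(rounds):
        updates = [local_update(w_global, X, y) for X, y in client_data]
        sizes = np.array([len(y) for _, y in client_data], dtype=float)
        w_global = np.average(updates, axis=0, weights=sizes)
    return w_global

# Synthetic clients with shifted feature distributions (non-IID).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for shift in (0.0, 1.5, -1.0):
    X = rng.normal(shift, 1.0, size=(200, 3))
    y = (X @ true_w > 0).astype(float)
    clients.append((X, y))

w = fed_avg(clients, dim=3)
print("global model coefficients:", np.round(w, 3))

In the surveyed systems, post-hoc explainers such as SHAP or LIME would sit in the same server-side position, applied to the aggregated model; the linear model here is chosen only so that the global weights themselves serve as the explanation.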