International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 55
Published: November 2025
Authors: Hamuza Senyonga, Charity Mahwire, Thelma Chimusoro, Enock Katenda
DOI: 10.5120/ijca2025925973
Hamuza Senyonga, Charity Mahwire, Thelma Chimusoro, Enock Katenda. EVALUATING THE VULNERABILITY OF DEEP LEARNING MODELS IN MEDICAL IMAGING TO ADVERSARIAL PERTURBATIONS. International Journal of Computer Applications. 187, 55 (November 2025), 46-60. DOI=10.5120/ijca2025925973
@article{ 10.5120/ijca2025925973,
author = { Hamuza Senyonga and Charity Mahwire and Thelma Chimusoro and Enock Katenda },
title = { EVALUATING THE VULNERABILITY OF DEEP LEARNING MODELS IN MEDICAL IMAGING TO ADVERSARIAL PERTURBATIONS },
journal = { International Journal of Computer Applications },
year = { 2025 },
volume = { 187 },
number = { 55 },
pages = { 46-60 },
doi = { 10.5120/ijca2025925973 },
publisher = { Foundation of Computer Science (FCS), NY, USA }
}
%0 Journal Article
%D 2025
%A Hamuza Senyonga
%A Charity Mahwire
%A Thelma Chimusoro
%A Enock Katenda
%T EVALUATING THE VULNERABILITY OF DEEP LEARNING MODELS IN MEDICAL IMAGING TO ADVERSARIAL PERTURBATIONS
%J International Journal of Computer Applications
%V 187
%N 55
%P 46-60
%R 10.5120/ijca2025925973
%I Foundation of Computer Science (FCS), NY, USA
Deep learning has revolutionized medical imaging, but its vulnerability to adversarial attacks poses serious risks to clinical use. This paper compares the robustness of convolutional neural networks (CNNs) and Vision Transformers (ViTs) trained on the NIH ChestX-ray14 dataset to detect pneumonia. Both models achieved high baseline accuracy (above 90 percent) but proved vulnerable to the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), DeepFool, and Carlini & Wagner (CW) attacks, with PGD and CW being the most disruptive. Defense strategies were also evaluated, including adversarial training, input preprocessing, ensemble modelling, and adversarial detection. Adversarial training provided the strongest protection, at the cost of lower clean-data accuracy; input preprocessing and ensembles offered partial resistance; and detection strategies flagged many naive adversarial inputs. However, no single defense was sufficient against every attack. The findings highlight the need for layered defense practices and raise ethical and regulatory issues related to trust, liability, and patient safety, underscoring the importance of robust and transparent AI in healthcare.
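For reference, the FGSM attack named in the abstract perturbs an input by a single step of size epsilon in the direction of the sign of the loss gradient, x_adv = x + epsilon * sign(grad_x L(theta, x, y)). The minimal PyTorch sketch below is illustrative only; the generic classifier, the epsilon value, and the assumption of pixel intensities in [0, 1] are not taken from the paper.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    # Enable gradient tracking on the input batch
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # One step of size epsilon along the sign of the input gradient
    adv = images + epsilon * images.grad.sign()
    # Keep pixel intensities in the valid [0, 1] range
    return adv.clamp(0.0, 1.0).detach()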