Untargeted white-box adversarial attack against a deep learning-based COVID-19 monitoring face mask detection system.

Publication date: May 05, 2023

The face mask detection system has been a valuable tool to combat COVID-19 by preventing its rapid transmission. This article demonstrates that present deep learning-based face mask detection systems are vulnerable to adversarial attacks, and proposes a framework for a robust face mask detection system that resists such attacks. We first developed a face mask detection system by fine-tuning the MobileNetV2 model and training it on a custom-built dataset. The model performed exceptionally well, achieving 95.83% accuracy on test data. Then, the model’s performance was assessed on adversarial images generated by the fast gradient sign method (FGSM). The FGSM attack reduced the model’s classification accuracy from 95.83% to 14.53%, indicating that the adversarial attack severely damaged the model’s performance. Finally, we show that the proposed robust framework enhances the model’s resistance to adversarial attacks. Although there was a notable drop in the robust model’s accuracy on unseen clean data, from 95.83% to 92.79%, it performed exceptionally well on adversarial data, improving accuracy from 14.53% to 92%. We expect our research to heighten awareness of adversarial attacks on COVID-19 monitoring systems and inspire others to protect healthcare systems from similar attacks.
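The core FGSM step referenced in the abstract perturbs each input in the direction of the sign of the loss gradient, x_adv = x + eps * sign(∇x L). The sketch below illustrates this on a toy logistic-regression classifier in NumPy; the MobileNetV2 model, the custom dataset, and the epsilon value used in the article are not reproduced here, and all weights and inputs are made-up illustrative values.

```python
import numpy as np

# Minimal FGSM sketch on a toy logistic-regression "classifier".
# This only illustrates the attack's core step; it is not the
# article's actual MobileNetV2 pipeline.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Perturb x to increase the binary cross-entropy loss.

    For logistic regression, d(BCE)/dx = (p - y) * w, so the
    perturbation is eps * sign((p - y) * w).
    """
    p = sigmoid(x @ w + b)       # model's clean prediction
    grad_x = (p - y) * w         # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# Toy example: a 4-dim input the model classifies correctly (label 1).
w = np.array([1.0, -2.0, 0.5, 3.0])
b = 0.1
x = np.array([0.2, -0.1, 0.4, 0.3])
y = 1.0

p_clean = sigmoid(x @ w + b)
x_adv = fgsm_attack(x, y, w, b, eps=0.5)
p_adv = sigmoid(x_adv @ w + b)

# The adversarial input pushes the prediction away from the true label.
assert p_adv < p_clean
```

The defense evaluated in the article follows the standard adversarial-training idea: the same FGSM-generated examples are mixed into the training set so the model learns to classify them correctly, at the cost of a small drop in clean-data accuracy.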

Concepts: Attack, Covid, Healthcare, Mobilenetv2, Valuable
Keywords: Adversarial attacks, Adversarial example, COVID-19, Deep learning, Face mask recognition


Type Source Name
disease MESH COVID-19
drug DRUGBANK Flunarizine

Original Article

