Journal of Intelligent Communication

Article

A Zero‑Sum Game‑Theoretic Analysis for Cost‑Aware Backdoor Attacks and Defenses in Deep Learning

Kallas, K., Tannous, C., & Faraoun, H. (2025). A Zero‑Sum Game‑Theoretic Analysis for Cost‑Aware Backdoor Attacks and Defenses in Deep Learning. Journal of Intelligent Communication, 4(2), 74–92. https://doi.org/10.54963/jic.v4i2.1576

Authors

  • Kassem Kallas
    IMT Atlantique, Inserm UMR 1101, 29200 Brest, France
    National Institute of Health and Medical Research, Inserm UMR 1101, 29200 Brest, France
  • Carine Tannous
    IMT Atlantique, Inserm UMR 1101, 29200 Brest, France
  • Hichem Faraoun
    IMT Atlantique, Inserm UMR 1101, 29200 Brest, France

Received: 2 July 2025; Revised: 12 August 2025; Accepted: 18 August 2025; Published: 4 September 2025

Abstract

Backdoor attacks pose a critical and increasingly realistic security threat to deep neural networks (DNNs), enabling adversaries to implant hidden behaviors that remain dormant under normal conditions while preserving high performance on benign data. Although numerous defenses have been proposed, most prior work treats attack and defense in isolation, without a principled mechanism for analyzing their strategic interplay under realistic resource constraints. This paper introduces BGCost, a zero‑sum game‑theoretic framework that formalizes backdoor attack–defense dynamics with explicit cost‑aware utility functions. The attacker seeks to maximize Attack Success Rate (ASR) while maintaining Clean Data Accuracy (CDA) above an acceptance threshold to remain stealthy, whereas the defender aims to limit ASR and preserve CDA while minimizing the computational and accuracy costs induced by mitigation. By embedding resource consumption directly into the utilities of both players, BGCost provides a structured benchmark for studying equilibrium strategies across unconstrained, balanced, and high‑cost operational regimes. Through numerical simulations, we show that cost‑aware game modeling fundamentally alters equilibrium behavior: unconstrained settings drive extreme strategies, costly defenses weaken robustness, costly attacks suppress adversarial impact, and balanced configurations yield deployment‑friendly equilibria with low ASR and high CDA. Rather than proposing a new algorithmic defense, BGCost serves as a decision‑theoretic tool that complements existing mechanisms by revealing how cost constraints shape optimal attacker–defender behavior in practice, guiding the design of realistic and resource‑efficient protections against backdoor threats.
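As a concrete illustration of the kind of cost-aware zero-sum analysis the abstract describes, the short Python sketch below solves a small matrix game by linear programming. The two-action strategy spaces, the ASR payoff numbers, the attack-cost term, and the weight `lam` are hypothetical choices made for this example, not values from the paper; BGCost's actual utilities also account for the CDA acceptance threshold and the defender's mitigation costs, which are omitted here for brevity.

```python
# A minimal sketch (illustrative assumptions throughout, not the
# paper's exact utilities or results).
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Return the attacker's optimal mixed strategy and the game value.

    payoff[i, j] is the attacker's (row player's) utility when the
    attacker plays action i and the defender plays action j; in a
    zero-sum game the defender receives the negative of this value.
    """
    m, n = payoff.shape
    # Decision variables: x_1..x_m (row probabilities) and v (game value).
    # linprog minimizes, so minimizing -v maximizes the guaranteed value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every defender column j: sum_i x_i * payoff[i, j] >= v,
    # rewritten in <= form as -payoff[:, j] @ x + v <= 0.
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities must sum to one; v itself is a free variable.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Hypothetical 2x2 game: rows = attacker poisoning rate {low, high},
# columns = defender mitigation {fine-tuning repair, trigger detection}.
# A stealthy low-rate attack evades detection but is washed out by
# retraining; a high-rate attack survives retraining but is detected.
base_asr = np.array([[0.20, 0.60],
                     [0.90, 0.30]])
attack_cost = np.array([[0.1],
                        [0.5]])        # heavier poisoning costs more
lam = 0.3                              # attacker's cost sensitivity
payoff = base_asr - lam * attack_cost  # cost-aware attacker utility

strategy, value = solve_zero_sum(payoff)
print("attacker mixed strategy:", strategy)  # ~[0.6, 0.4] here
print("game value (cost-adjusted ASR):", value)  # ~0.402
```

With these illustrative numbers the equilibrium is mixed: subtracting the cost term from the high-rate attack's payoffs pushes the attacker toward the stealthier action, mirroring the abstract's observation that costly attacks suppress adversarial impact.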

Keywords:

Adversarial Machine Learning; Backdoor Attacks; Backdoor Defenses; Game Theory; Deep Neural Networks; AI Security; Attack‑Defense Strategies

Copyright © UK Scientific Publishing Limited.