A systematic literature review on untargeted model poisoning attacks and defense mechanisms in federated learning

Tabassum Anika

Abstract

In recent years, Federated Learning has offered a promising solution to the privacy concerns of users whose data train Machine Learning models. However, the models remain exposed to exploitation by both inside and outside adversaries. To preserve data privacy and model integrity, a Federated Learning system must be protected against such attackers; in particular, untargeted model poisoning attacks, which degrade overall model quality, need to be detected early. This study surveys attack, detection, and defense mechanisms related to untargeted model poisoning. A total of 245 studies were retrieved by searching Google Scholar, ScienceDirect, and Scopus; after applying the selection criteria, 15 studies were included in this systematic literature review. We highlight the attacks and defense mechanisms reported in the included studies and recommend avenues for further research in the area.
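To illustrate the setting the review covers, the sketch below (not drawn from any of the included studies; all function names and parameters are hypothetical, using only NumPy) shows how a single malicious client submitting an arbitrary update can degrade plain federated averaging, and how a simple coordinate-wise median aggregator limits the damage in one toy round.

import numpy as np

def local_update(global_model, data, lr=0.1):
    # Honest client: one gradient-descent step on a local least-squares objective.
    X, y = data
    grad = X.T @ (X @ global_model - y) / len(y)
    return global_model - lr * grad

def poisoned_update(global_model, scale=10.0):
    # Untargeted poisoning: a large random update meant only to hurt model quality.
    return global_model + scale * np.random.randn(*global_model.shape)

def fed_avg(updates):
    # Plain federated averaging: a single large malicious update shifts the mean.
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    # Coordinate-wise median: a simple Byzantine-robust aggregation rule.
    return np.median(updates, axis=0)

# Toy round with 9 honest clients and 1 attacker.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
global_w = np.zeros(2)
clients = []
for _ in range(9):
    X = rng.normal(size=(32, 2))
    y = X @ true_w + 0.01 * rng.normal(size=32)
    clients.append((X, y))

updates = [local_update(global_w, d) for d in clients]
updates.append(poisoned_update(global_w))

print("FedAvg:", fed_avg(np.stack(updates)))          # perturbed by the malicious update
print("Median:", median_aggregate(np.stack(updates)))  # close to the honest clients' average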

How to Cite
Anika, T. (2023). A systematic literature review on untargeted model poisoning attacks and defense mechanisms in federated learning. Systematic Literature Review and Meta-Analysis Journal, 3(4), 117–126. https://doi.org/10.54480/slr-m.v3i4.42