PriMera Scientific Engineering (ISSN: 2834-2550)

Research Article

Volume 3 Issue 6

Step-wise Model Aggregation for Securing Federated Learning

Shahenda Magdy*, Mahmoud Bahaa and Alia ElBolock

November 23, 2023

Abstract

Federated learning (FL) is a distributed machine learning technique that enables remote devices to train a shared model by exchanging local model updates instead of raw data. While this design improves privacy, it remains exposed to several attacks. In this work, we propose a new aggregation system that mitigates some of these vulnerabilities. Our framework proceeds step-wise: the server connects with each client individually, calculates how each client's update would change the global model, and withholds a client's model from aggregation until the accepted range of distances to the other clients has been computed and the client's distance falls within that range. This approach aims to mitigate causative, Byzantine, and membership inference attacks, and achieves over 90 percent accuracy in detecting and removing malicious agents.
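The filtering step described above can be sketched in code. The following is a minimal illustration, not the authors' exact algorithm: it measures each client's mean distance to the other clients, defines the accepted range as a multiple of the median distance (the `tolerance` parameter and the median-based range are assumptions for illustration), and averages only the accepted updates.

```python
import numpy as np

def stepwise_aggregate(client_updates, tolerance=1.5):
    """Illustrative sketch of distance-based filtering before aggregation.

    `tolerance` is a hypothetical parameter bounding the accepted range of
    distances (here: at most `tolerance` times the median distance).
    """
    updates = np.stack(client_updates)  # shape: (n_clients, n_params)
    n = len(updates)
    # Step 1: per client, mean Euclidean distance to every other client
    dists = np.array([
        np.mean([np.linalg.norm(updates[i] - updates[j])
                 for j in range(n) if j != i])
        for i in range(n)
    ])
    # Step 2: accept only clients whose distance lies in the accepted range
    accepted = dists <= tolerance * np.median(dists)
    # Step 3: aggregate the accepted clients (FedAvg-style mean)
    return updates[accepted].mean(axis=0), accepted

# Usage: four near-identical honest updates plus one poisoned outlier
clients = [np.full(10, v) for v in [0.0, 0.01, -0.01, 0.02, 5.0]]
global_update, mask = stepwise_aggregate(clients)
# mask marks the outlier client as rejected
```

A malicious update that is far from the others inflates its mean distance to the rest of the cohort, so it falls outside the accepted range and never reaches the global model.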

Keywords: Federated Learning; Security; Step-wise Model Aggregation
