This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data", 2017.
X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang, "On the convergence of FedAvg on non-IID data", 2020.
T.-M. H. Hsu, H. Qi, and M. Brown, "Measuring the effects of non-identical data distribution for federated visual classification", 2019.
X. Ma, J. Zhu, Z. Lin, S. Chen, and Y. Qin, "A state-of-the-art survey on solving non-IID data in federated learning", Future Generation Computer Systems, vol. 135, 2022, 244–258, https://doi.org/10.1016/j.future.2022.05.003.
R. Gosselin, L. Vieu, F. Loukil, and A. Benoit, "Privacy and security in federated learning: A survey", Applied Sciences, vol. 12, no. 19, 2022.
P. Erbil and M. E. Gursoy, "Detection and mitigation of targeted data poisoning attacks in federated learning". In: 2022 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech), 2022, 1–8, 10.1109/DASC/PiCom/CBDCom/Cy55231.2022.9927914.
A. Danilenka, "Mitigating the effects of non-IID data in federated learning with a self-adversarial balancing method". In: 2023 18th Conference on Computer Science and Intelligence Systems (FedCSIS), 2023, 925–930.
G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, "EMNIST: An extension of MNIST to handwritten letters", 2017.
A. Krizhevsky, "Learning multiple layers of features from tiny images", 2009.
Z. Zhang, X. Cao, J. Jia, and N. Z. Gong, "FLDetector: Defending federated learning against model poisoning attacks via detecting malicious clients". In: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 2022, 2545–2555, 10.1145/3534678.3539231.
D. Li, W. E. Wong, W. Wang, Y. Yao, and M. Chau, "Detection and mitigation of label-flipping attacks in federated learning systems with KPCA and K-means". In: 2021 8th International Conference on Dependable Systems and Their Applications (DSA), 2021, 551–559, 10.1109/DSA52907.2021.00081.
P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, "Machine learning with adversaries: Byzantine tolerant gradient descent". In: I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, eds., Advances in Neural Information Processing Systems, vol. 30, 2017.
D. Cao, S. Chang, Z. Lin, G. Liu, and D. Sun, "Understanding distributed poisoning attack in federated learning". In: 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS), 2019, 233–239, 10.1109/ICPADS47876.2019.00042.
X. Cao, M. Fang, J. Liu, and N. Z. Gong, "FLTrust: Byzantine-robust federated learning via trust bootstrapping", 2022.
D. Yin, Y. Chen, K. Ramchandran, and P. Bartlett, "Byzantine-robust distributed learning: Towards optimal statistical rates", 2021.
C. Xie, O. Koyejo, and I. Gupta, "Generalized Byzantine-tolerant SGD", 2018.
C. Fung, C. J. M. Yoon, and I. Beschastnikh, "The limitations of federated learning in sybil settings". In: 23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020), San Sebastian, 2020, 301–316.
Y. Xie, W. Zhang, R. Pi, F. Wu, Q. Chen, X. Xie, and S. Kim, "Robust federated learning against both data heterogeneity and poisoning attack via aggregation optimization", 2022.
S. Han, S. Park, F. Wu, S. Kim, B. Zhu, X. Xie, and M. Cha, "Towards attack-tolerant federated learning via critical parameter analysis". In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, 4999–5008.
S. Park, S. Han, F. Wu, S. Kim, B. Zhu, X. Xie, and M. Cha, "FedDefender: Client-side attack-tolerant federated learning". In: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, New York, NY, USA, 2023, 1850–1861, 10.1145/3580305.3599346.
C. Chen, Y. Liu, X. Ma, and L. Lyu, "CalFAT: Calibrated federated adversarial training with label skewness", 2023.
G. Zizzo, A. Rawat, M. Sinn, and B. Buesser, "FAT: Federated adversarial training", 2020.
Z. Li, J. Shao, Y. Mao, J. H. Wang, and J. Zhang, "Federated learning with GAN-based data synthesis for non-IID clients", 2022.
Y. Lu, P. Qian, G. Huang, and H. Wang, "Personalized federated learning on long-tailed data via adversarial feature augmentation", 2023.
X. Li, Z. Song, and J. Yang, "Federated adversarial learning: A framework with convergence analysis", 2022.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks", 2014.
O. Suciu, R. Marginean, Y. Kaya, H. Daumé III, and T. Dumitras, "When does machine learning FAIL? Generalized transferability for evasion and poisoning attacks". In: 27th USENIX Security Symposium (USENIX Security 18), Baltimore, MD, 2018, 1299–1316.
I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples", 2015.
A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial examples in the physical world", 2017.
Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, "Boosting adversarial attacks with momentum", 2018.
G. Xia, J. Chen, C. Yu, and J. Ma, "Poisoning attacks in federated learning: A survey", IEEE Access, vol. 11, 2023, 10708–10722, 10.1109/ACCESS.2023.3238823.
A. Shafahi, W. R. Huang, M. Najibi, O. Suciu, C. Studer, T. Dumitras, and T. Goldstein, "Poison frogs! Targeted clean-label poisoning attacks on neural networks". In: Neural Information Processing Systems, 2018.
V. Shejwalkar, A. Houmansadr, P. Kairouz, and D. Ramage, "Back to the drawing board: A critical evaluation of poisoning attacks on federated learning", arXiv, abs/2108.10241, 2021.
H. Xiao, H. Xiao, and C. Eckert, "Adversarial label flips attack on support vector machines". In: Proceedings of the 20th European Conference on Artificial Intelligence, NLD, 2012, 870–875.
V. Tolpegin, S. Truex, M. E. Gursoy, and L. Liu, "Data poisoning attacks against federated learning systems", 2020.
A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: An imperative style, high-performance deep learning library". In: Advances in Neural Information Processing Systems 32, Curran Associates, Inc., 2019, 8024–8035.
S. Marcel and Y. Rodriguez, "Torchvision the machine-vision package of torch". In: Proceedings of the 18th ACM International Conference on Multimedia, New York, NY, USA, 2010, 1485–1488, 10.1145/1873951.1874254.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition", Proceedings of the IEEE, vol. 86, no. 11, 1998, 2278–2324, 10.1109/5.726791.
M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks", 2019.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database". In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, 248–255.
L. Lyu, H. Yu, X. Ma, C. Chen, L. Sun, J. Zhao, Q. Yang, and P. S. Yu, "Privacy and robustness in federated learning: Attacks and defenses", IEEE Transactions on Neural Networks and Learning Systems, 2022, 1–21, 10.1109/TNNLS.2022.3216981.
F. Wilcoxon, "Individual comparisons by ranking methods", Springer, 1992, 196–202.