Federated learning (FL) has received considerable research attention in the context of privacy-protection requirements. By jointly training deep learning models, a variety of training tasks can be performed competently with the help of invited participants. However, FL is susceptible to a large number of attacks on both privacy and security. This paper presents the federated learning workflow and shows how a malicious client can exploit vulnerabilities in the FL system to attack it. A systematic survey of existing research on the taxonomy and classification of the federated learning attack surface is presented. Across this attack surface, attackers compromise security and privacy, gain free incentives, and violate the Confidentiality, Integrity, and Availability (CIA) security triad. In addition, state-of-the-art defensive approaches against FL attacks are elaborated, which help to protect systems and minimize the likelihood of attacks. FL models and tools for privacy attacks are explained, along with their strengths and drawbacks. Finally, technical challenges and possible research directions are discussed as future work toward building robust FL systems.
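To make the abstract's point concrete, here is a minimal sketch of one federated averaging (FedAvg) round and how a single malicious client can skew the global model. The scalar "weight" and the specific values are illustrative assumptions, not taken from the paper; real FL models aggregate full parameter vectors.

```python
def fedavg(updates):
    """Server step: aggregate client updates by simple averaging,
    as in the standard FedAvg scheme (equal client weighting assumed)."""
    return sum(updates) / len(updates)

# Honest clients send local updates close to the true value (~1.0 here).
honest = [1.02, 0.98, 1.01]

# A malicious client submits a scaled (poisoned) update to shift the
# global model -- one instance of the FL attack surface the survey covers.
poisoned = honest + [50.0]

clean_model = fedavg(honest)       # ~1.0
attacked_model = fedavg(poisoned)  # pulled far from the honest average
print(round(clean_model, 2))       # 1.0
print(round(attacked_model, 2))    # 13.25
```

The example shows why naive averaging has no robustness to even one adversarial participant; defenses surveyed in such papers (e.g. robust aggregation) replace the plain mean with median- or trimming-based rules.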