Poster

The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning

Joshua C. Zhao · Ahmed Roushdy Elkordy · Atul Sharma · Yahya H. Ezzeldin · Salman Avestimehr · Saurabh Bagchi

West Building Exhibit Halls ABC 379

Abstract:

Secure aggregation promises a heightened level of privacy in federated learning by ensuring that the server only has access to a decrypted aggregate update. Within this setting, linear layer leakage methods are the only data reconstruction attacks able to scale and achieve a high leakage rate regardless of the number of clients or batch size. This is achieved by increasing the size of an injected fully-connected (FC) layer. We show that this results in a resource overhead that grows with the number of clients, and that the overhead stems from an incorrect perspective in all prior work, which treats an attack on an aggregate update the same as an attack on an individual update with a larger batch size. Instead, attacking the update from the perspective that aggregation combines multiple individual updates allows sparsity to be applied, alleviating the resource overhead. We show that the use of sparsity can decrease the model size overhead by over 327× and the computation time by 3.34× compared to SOTA, while maintaining an equivalent total leakage rate of 77% even with 1000 clients in aggregation.
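To make the scaling argument concrete, the following is a minimal back-of-the-envelope sketch in Python. The sizes (`input_dim`, `batch_size`, `bins_per_image`) are hypothetical, chosen only to illustrate how the dense injected FC layer grows linearly with the number of clients while each client's nonzero block stays constant; the paper's reported 327× and 3.34× figures come from its own experimental configuration, not from this arithmetic.

```python
# Hypothetical sizes for illustration only (not the paper's setup).
input_dim = 3 * 32 * 32   # a flattened 32x32 RGB image
batch_size = 64           # images per client
bins_per_image = 2        # leakage attacks allocate several "bins" per image
num_clients = 1000

# Prior perspective: the aggregate is treated as one batch of
# num_clients * batch_size images, so the injected FC layer needs a unit
# (a full row of weights) for every image in the aggregate -- a dense
# layer whose size grows linearly with the number of clients.
dense_units = num_clients * batch_size * bins_per_image
dense_params = dense_units * (input_dim + 1)        # weights + biases

# Aggregation-aware perspective: each client's update only touches its own
# images, so only the rows assigned to that client need nonzero weights.
# The block a single client actually stores and computes with is just:
per_client_units = batch_size * bins_per_image
per_client_params = per_client_units * (input_dim + 1)

print(f"dense FC parameters:           {dense_params:,}")
print(f"nonzero parameters per client: {per_client_params:,}")
print(f"overhead ratio:                {dense_params // per_client_params}x")
```

Under these assumed numbers the dense layer carries roughly 1000× more parameters than any single client's nonzero block, which is the gap the sparsity-based formulation exploits.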
