Sheaf HyperNetworks for Personalized Federated Learning
Published in arXiv, 2024
Sheaf HyperNetworks (SHNs) are a novel hypernetwork-based methodology for improving the performance of any personalized federated learning setting.
The work was presented at MobiUK 2024. The abstract of the presentation can be found here.
Abstract:
Graph hypernetworks (GHNs), constructed by combining graph neural networks (GNNs) with hypernetworks (HNs), leverage relational data across various domains such as neural architecture search, molecular property prediction, and federated learning. Despite GNNs and HNs being individually successful, we show that GHNs suffer from problems that compromise their performance, such as over-smoothing and heterophily. Moreover, GHNs cannot be applied directly to personalized federated learning (PFL) scenarios, where an a priori client relation graph may be absent, private, or inaccessible. To mitigate these limitations in the context of PFL, we propose a novel class of HNs, sheaf hypernetworks (SHNs), which combine cellular sheaf theory with HNs to improve parameter sharing for PFL. We thoroughly evaluate SHNs across diverse PFL tasks, including multi-class classification, traffic forecasting, and weather forecasting. Additionally, we provide a methodology for constructing client relation graphs in scenarios where such graphs are unavailable. We show that SHNs consistently outperform existing PFL solutions in complex non-IID scenarios. While the baselines’ performance fluctuates depending on the task, SHNs show improvements of up to 2.7% in accuracy and 5.3% lower mean squared error over the best-performing baseline.
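To make the idea of a hypernetwork operating over a client relation graph more concrete, below is a minimal PyTorch sketch. It is not the paper's implementation: the class and parameter names (e.g. `SheafHyperNetworkSketch`, `stalk_dim`) are hypothetical, and the single sheaf-Laplacian-style diffusion step only gestures at how learned restriction maps could mix client embeddings before decoding per-client model parameters.

```python
import torch
import torch.nn as nn


class SheafHyperNetworkSketch(nn.Module):
    """Toy illustration (not the paper's code): learn an embedding per client
    node of a relation graph, mix embeddings with one sheaf-Laplacian-style
    diffusion step using learned restriction maps, then decode each client's
    embedding into a flat parameter vector for its local model."""

    def __init__(self, num_clients: int, embed_dim: int, stalk_dim: int, target_param_count: int):
        super().__init__()
        self.embed = nn.Embedding(num_clients, embed_dim)
        # Hypothetical restriction maps: one shared linear map per edge-endpoint role.
        self.restrict_src = nn.Linear(embed_dim, stalk_dim, bias=False)
        self.restrict_dst = nn.Linear(embed_dim, stalk_dim, bias=False)
        self.decode = nn.Linear(stalk_dim, target_param_count)

    def forward(self, edge_index: torch.Tensor) -> torch.Tensor:
        # edge_index: (2, E) long tensor listing edges of the client relation graph.
        x = self.embed.weight                                          # (N, embed_dim)
        src, dst = edge_index
        # Disagreement of neighbouring clients in the shared edge stalk space.
        diff = self.restrict_src(x[src]) - self.restrict_dst(x[dst])  # (E, stalk_dim)
        # Scatter the disagreement back to both endpoints (Laplacian-like aggregation).
        agg = torch.zeros(x.size(0), diff.size(1))
        agg.index_add_(0, src, diff)
        agg.index_add_(0, dst, -diff)
        # One diffusion step, then decode per-client parameters.
        h = self.restrict_src(x) - agg                                 # (N, stalk_dim)
        return self.decode(h)                                          # (N, target_param_count)


# Usage on a tiny chain-shaped client graph: one parameter vector per client.
hypernet = SheafHyperNetworkSketch(num_clients=4, embed_dim=8, stalk_dim=8, target_param_count=10)
edges = torch.tensor([[0, 1, 2], [1, 2, 3]])
client_params = hypernet(edges)  # shape (4, 10)
```

In a full PFL setup, each client's row of the output would parameterize (part of) that client's local model; here the decoder simply emits a flat vector to keep the sketch self-contained.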
Recommended citation: Bao Nguyen, Lorenzo Sani, Xinchi Qiu, Pietro Liò, & Nicholas D. Lane. (2024). Sheaf HyperNetworks for Personalized Federated Learning.
Download Paper