TY - JOUR
T1 - HeaPS: Heterogeneity-Aware Participant Selection for Efficient Federated Learning
AU - Yang, Duo
AU - Hu, Bing
AU - Gao, Yunqi
AU - Jin, A-Long
AU - Liu, An
AU - Yeung, Kwan L.
AU - You, Yang
N1 - Publisher Copyright:
© 2025 Elsevier Inc.
PY - 2025/12
Y1 - 2025/12
AB - Federated learning enables collaborative model training among numerous clients. However, existing participant/client selection methods fail to fully leverage the advantages of clients with excellent computational or communication capabilities. In this paper, we propose HeaPS, a novel Heterogeneity-aware Participant Selection framework for efficient federated learning. We introduce a finer-grained global selection algorithm that selects communication-strong leaders and computation-strong members from the candidate clients. The leaders are responsible for communicating with the server to reduce the per-round duration, as well as for contributing gradients, while the members communicate with the leaders to contribute additional gradients, obtained from high-utility data, to the global model and thereby improve the final model accuracy. We further develop a gradient migration path generation algorithm to match each member with its optimal leader. We also design a client scheduler that enables parallel local training of leaders and members based on gradient migration. Experimental results show that, compared with state-of-the-art methods, HeaPS achieves a speedup of up to 3.20× in time-to-accuracy performance and improves the final accuracy by up to 3.57%. The code for HeaPS is available at https://github.com/Dora233/HeaPS.
KW - Client utility
KW - Data heterogeneity
KW - Federated learning
KW - Participant/client selection
KW - System heterogeneity
UR - https://www.scopus.com/pages/publications/105013859133
U2 - 10.1016/j.jpdc.2025.105168
DO - 10.1016/j.jpdc.2025.105168
M3 - Article
AN - SCOPUS:105013859133
SN - 0743-7315
VL - 206
JO - Journal of Parallel and Distributed Computing
JF - Journal of Parallel and Distributed Computing
M1 - 105168
ER -