Gradient Statistics Aware Power Control for Over-the-Air Federated Learning


Abstract

Federated learning (FL) is a promising technique that enables many edge devices to collaboratively train a machine learning model over wireless networks. By exploiting the superposition nature of wireless waveforms, over-the-air computation (AirComp) can accelerate model aggregation and hence facilitate communication-efficient FL. Due to channel fading, power control is crucial in AirComp. Prior works assume that the signals to be aggregated from each device, i.e., the local gradients, have identical statistics. In FL, however, gradient statistics vary over both training iterations and feature dimensions, and are unknown in advance. This paper studies the power control problem for over-the-air FL by taking gradient statistics into account. The goal is to minimize the aggregation error by optimizing the transmit power at each device subject to peak power constraints. We obtain the optimal policy in closed form when gradient statistics are given. Notably, we show that the optimal transmit power is continuous and monotonically decreasing in the squared multivariate coefficient of variation (SMCV) of the gradient vectors. We then propose a method to estimate gradient statistics with negligible communication cost. Experimental results demonstrate that the proposed gradient-statistics-aware power control achieves higher test accuracy than existing schemes across a wide range of scenarios.
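To make the SMCV quantity concrete, the following is a minimal sketch assuming the standard definition of the squared multivariate coefficient of variation (trace of the covariance divided by the squared norm of the mean); the paper's exact definition and its closed-form power policy are not given in this abstract, so the monotone policy shown is purely a hypothetical illustration of the reported shape (continuous, decreasing in SMCV):

```python
import numpy as np

def smcv(gradients):
    """Squared multivariate coefficient of variation of stacked
    gradient vectors (rows = samples/devices).

    Assumed definition: trace of the per-dimension sample variance
    divided by the squared norm of the sample mean."""
    g = np.asarray(gradients, dtype=float)
    mean = g.mean(axis=0)
    var = g.var(axis=0)              # per-dimension sample variance
    return float(var.sum() / (mean @ mean))

def transmit_power(smcv_value, p_peak=1.0):
    """Hypothetical power policy: continuous and monotonically
    decreasing in the SMCV, capped by the peak power constraint.
    This is NOT the paper's closed-form solution, only a sketch
    of the qualitative behavior described in the abstract."""
    return p_peak / (1.0 + smcv_value)

# Identical gradients across devices -> zero variance -> SMCV = 0,
# so the sketch policy transmits at peak power.
grads = [[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]]
print(smcv(grads))                   # 0.0
print(transmit_power(smcv(grads)))   # 1.0
```

Dispersed gradients (larger SMCV) yield lower transmit power under this sketch, matching the monotonicity result stated above.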
