This paper investigates transmission power control in an over-the-air federated edge learning (Air-FEEL) system. Different from conventional power control designs (e.g., those minimizing the individual mean squared error (MSE) of the over-the-air aggregation at each round), we consider a new power control design that aims at directly maximizing the convergence speed. Towards this end, we first analyze the convergence behavior of Air-FEEL (in terms of the optimality gap) subject to aggregation errors at different communication rounds. It is revealed that if the aggregation estimates are unbiased, then the training algorithm converges exactly to the optimal point under mild conditions; if they are biased, then the algorithm converges to an error floor determined by the accumulated estimate bias over communication rounds. Next, building upon the convergence results, we optimize the power control to directly minimize the derived optimality gaps under both biased and unbiased aggregations, subject to a set of average and maximum power constraints at individual edge devices. We transform both problems into convex forms and obtain their structured optimal solutions, both in the form of regularized channel inversion, by using the Lagrangian duality method. Finally, numerical results show that the proposed power control policies achieve significantly faster convergence for Air-FEEL than benchmark policies with fixed transmit power or conventional MSE minimization.
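To make the over-the-air aggregation model and the regularized-channel-inversion structure concrete, the following minimal Python sketch simulates a single Air-FEEL aggregation round. It is an illustration under stated assumptions, not the optimal policy derived in the paper: the coefficient p_k = min(P_max, (|h_k| / (|h_k|^2 + lam))^2), the regularization weight lam (standing in for the Lagrange multipliers of the power constraints), the noise level, and the server-side de-noising factor eta are all assumed here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 100                     # number of edge devices, gradient dimension
P_max, lam, noise_std = 1.0, 0.1, 0.05  # illustrative power budget, regularizer, noise

grads = rng.normal(size=(K, d))    # local gradients g_k at the edge devices
h = rng.rayleigh(size=K)           # channel magnitudes |h_k| (phases assumed pre-compensated)

# Regularized channel inversion (illustrative form): strong channels are partially
# inverted, while weak channels are de-emphasized instead of exhausting the power budget.
p = np.minimum(P_max, (h / (h**2 + lam))**2)

# Signals superpose over the multiple-access channel; the server observes the sum plus noise.
y = ((np.sqrt(p) * h)[:, None] * grads).sum(axis=0) + noise_std * rng.normal(size=d)

# Server-side de-noising/scaling to form the aggregated-gradient estimate.
eta = (np.sqrt(p) * h).sum()
g_hat = y / eta

print("aggregation MSE vs. true average gradient:", np.mean((g_hat - grads.mean(axis=0)) ** 2))
```

Note that when the effective weights sqrt(p_k)|h_k| are unequal across devices, the estimate g_hat is a biased version of the simple average of the local gradients, which is exactly the biased-aggregation regime whose error floor the convergence analysis characterizes.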