
Multi-task Over-the-Air Federated Learning: A Non-Orthogonal Transmission Approach

Published by: Dian Fan
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In this letter, we propose a multi-task over-the-air federated learning (MOAFL) framework, where multiple learning tasks share edge devices for data collection and learning models under the coordination of an edge server (ES). Specifically, the model updates for all the tasks are transmitted and superposed concurrently over a non-orthogonal uplink channel via over-the-air computation, and the aggregation results of all the tasks are reconstructed at the ES through an extended version of the turbo compressed sensing algorithm. Both the convergence analysis and numerical results demonstrate that the MOAFL framework can significantly reduce the uplink bandwidth consumption of multiple tasks without causing substantial learning performance degradation.
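
The key mechanism here is that over-the-air computation turns the multiple-access channel into an adder: when all devices transmit their pre-equalized analog updates at the same time, the ES receives their superposition and can read off the aggregate directly. The NumPy sketch below illustrates this for a single aggregation round; the device count, noise level, and plain channel-inversion pre-equalization are illustrative assumptions, and the paper's actual MOAFL pipeline additionally sparsifies the multi-task updates and reconstructs them with an extended turbo compressed sensing algorithm, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim = 10, 1000          # illustrative sizes
snr_db = 20.0

# Local model updates, e.g. gradients after local training (one row per device).
updates = rng.normal(size=(num_devices, dim))

# Each device pre-equalizes with its own (known) channel so that all
# contributions add up coherently at the ES (power control omitted for brevity).
channels = rng.normal(size=num_devices) + 1j * rng.normal(size=num_devices)
tx_signals = updates / channels[:, None]

# Non-orthogonal uplink: every device transmits at once, and the channel
# itself computes the sum of the updates (plus receiver noise).
signal_power = num_devices                      # per-entry variance of the sum
noise_std = np.sqrt(signal_power / 10 ** (snr_db / 10))
received = (channels[:, None] * tx_signals).sum(axis=0) \
           + noise_std * rng.normal(size=dim)

# The ES reads off the average update from the superposed signal.
aggregated = np.real(received) / num_devices
print("aggregation MSE:", np.mean((aggregated - updates.mean(axis=0)) ** 2))
```

Because all devices occupy the same channel resource simultaneously, the uplink cost of one aggregation round is independent of the number of devices, which is the bandwidth saving the abstract refers to.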




Read also

With the aim of integrating over-the-air federated learning (AirFL) and non-orthogonal multiple access (NOMA) into an on-demand universal framework, this paper proposes a novel reconfigurable intelligent surface (RIS)-aided hybrid network by leveraging the RIS to flexibly adjust the signal processing order of heterogeneous data. The objective of this work is to maximize the achievable hybrid rate by jointly optimizing the transmit power, controlling the receive scalar, and designing the phase shifts. Since the concurrent transmissions of all computation and communication signals are aided by the discrete phase shifts at the RIS, the considered problem (P0) is a challenging mixed integer programming problem. To tackle this intractable issue, we decompose the original problem (P0) into a non-convex problem (P1) and a combinatorial problem (P2), which are characterized by the continuous and discrete variables, respectively. For the transceiver design problem (P1), the power allocation subproblem is first solved by invoking the difference-of-convex programming, and then the receive control subproblem is addressed by using the successive convex approximation, where the closed-form expressions of simplified cases are derived to obtain deep insights. For the reflection design problem (P2), the relaxation-then-quantization method is adopted to find a suboptimal solution for striking a trade-off between complexity and performance. Afterwards, an alternating optimization algorithm is developed to solve the non-linear and non-convex problem (P0) iteratively. Finally, simulation results reveal that 1) the proposed RIS-aided hybrid network can support the on-demand communication and computation efficiently, 2) the performance gains can be improved by properly selecting the location of the RIS, and 3) the designed algorithms are also applicable to conventional networks with only AirFL or NOMA users.
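
The relaxation-then-quantization step for problem (P2) can be illustrated in isolation: first solve for continuous phase shifts (here, the textbook co-phasing solution that aligns every reflected path with the direct one), then project each phase onto a b-bit discrete set. This is a hedged sketch of that generic two-step idea under a made-up single-user channel model; the paper's joint design, with the transceiver variables of (P1) in the loop, is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
n_elements, bits = 32, 2             # illustrative RIS size and phase resolution

h_d = rng.normal() + 1j * rng.normal()                                # direct link
h_r = rng.normal(size=n_elements) + 1j * rng.normal(size=n_elements)  # user-to-RIS
g = rng.normal(size=n_elements) + 1j * rng.normal(size=n_elements)    # RIS-to-receiver

# Step 1 (relaxation): continuous phases that co-phase each reflected path
# with the direct path, maximizing |h_d + sum_i g_i * exp(j*theta_i) * h_r_i|.
ideal_phase = np.angle(h_d) - np.angle(g * h_r)

# Step 2 (quantization): project each phase onto the 2^bits-level discrete set.
levels = 2 * np.pi * np.arange(2 ** bits) / (2 ** bits)
nearest = np.argmin(np.abs(np.exp(1j * ideal_phase[:, None])
                           - np.exp(1j * levels[None, :])), axis=1)
quantized = levels[nearest]

for name, phase in (("continuous", ideal_phase), ("quantized", quantized)):
    gain = np.abs(h_d + (g * h_r) @ np.exp(1j * phase))
    print(f"{name:>10s} phases -> composite channel gain {gain:.3f}")
```
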
Xiang Ma, Haijian Sun, Qun Wang (2021)
A new machine learning (ML) technique termed federated learning (FL) aims to preserve data at the edge devices and to only exchange ML model parameters in the learning process. FL not only reduces the communication needs but also helps to protect local privacy. Although FL has these advantages, it can still experience large communication latency when there are massive edge devices connected to the central parameter server (PS) and/or millions of model parameters involved in the learning process. Over-the-air computation (AirComp) is capable of computing while transmitting data by allowing multiple devices to send data simultaneously by using analog modulation. To achieve good performance in FL through AirComp, user scheduling plays a critical role. In this paper, we investigate and compare different user scheduling policies, which are based on various criteria such as wireless channel conditions and the significance of model updates. Receiver beamforming is applied to minimize the mean-square-error (MSE) distortion of the function aggregation result via AirComp. Simulation results show that scheduling based on the significance of model updates has smaller fluctuations in the training process, while scheduling based on channel condition has an advantage in energy efficiency.
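
The two families of scheduling policies compared in that abstract reduce to a choice of ranking metric: schedule the k devices with the strongest channels, or the k devices whose local updates are most significant. A minimal sketch under assumed inputs (per-device channel gains and update norms) follows; the receiver-beamforming MSE objective used in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
num_devices, k = 20, 5               # illustrative: schedule 5 of 20 devices

# Assumed inputs: fading channel magnitudes |h_i| and local update norms.
channel_gain = np.abs(rng.normal(size=num_devices)
                      + 1j * rng.normal(size=num_devices))
update_norm = rng.gamma(shape=2.0, size=num_devices)

def schedule(metric: np.ndarray, k: int) -> np.ndarray:
    """Return the indices of the k devices with the largest metric."""
    return np.argsort(metric)[-k:][::-1]

# Policy 1: favor good channels (more reliable AirComp, better energy efficiency).
print("channel-based     :", schedule(channel_gain, k))
# Policy 2: favor significant updates (steadier training progress).
print("significance-based:", schedule(update_norm, k))
```
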
Machine learning and wireless communication technologies are jointly facilitating an intelligent edge, where federated edge learning (FEEL) is a promising training framework. As wireless devices involved in FEEL are resource limited in terms of communication bandwidth, computing power and battery capacity, it is important to carefully schedule them to optimize the training performance. In this work, we consider an over-the-air FEEL system with analog gradient aggregation, and propose an energy-aware dynamic device scheduling algorithm to optimize the training performance under energy constraints of devices, where both communication energy for gradient aggregation and computation energy for local training are included. The consideration of computation energy makes dynamic scheduling challenging, as devices are scheduled before local training, but the communication energy for over-the-air aggregation depends on the $\ell_2$-norm of the local gradient, which is only known after local training. We thus incorporate estimation methods into scheduling to predict the gradient norm. Taking the estimation error into account, we characterize the performance gap between the proposed algorithm and its offline counterpart. Experimental results show that, under a highly unbalanced local data distribution, the proposed algorithm can increase the accuracy by 4.9% on CIFAR-10 dataset compared with the myopic benchmark, while satisfying the energy constraints.
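
The circular dependency described above, where communication energy depends on the gradient norm but the norm is only available after the local training that scheduling is supposed to gate, is resolved by predicting the norm from history. The sketch below uses a simple exponential moving average as the estimator and a toy energy model; both are assumptions for illustration, not the paper's estimator or its error analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
num_devices, rounds = 8, 50
energy_budget = np.full(num_devices, 30.0)   # illustrative per-device budgets
alpha, comp_energy = 0.5, 0.4                # EMA weight, per-round compute cost

norm_est = np.ones(num_devices)              # running estimate of gradient norms

for t in range(rounds):
    # Schedule *before* local training, using the estimated norm to predict
    # communication energy (assumed proportional to the squared norm).
    predicted_cost = comp_energy + norm_est ** 2
    scheduled = np.flatnonzero(energy_budget >= predicted_cost)

    # Local training reveals the true norms (simulated as a decaying process).
    true_norm = (0.99 ** t) * rng.gamma(shape=2.0, size=num_devices)

    # Scheduled devices pay the *actual* cost; estimation error shows up here.
    energy_budget[scheduled] -= comp_energy + true_norm[scheduled] ** 2

    # Refine the estimator with the norms observed from scheduled devices.
    norm_est[scheduled] = (alpha * true_norm[scheduled]
                           + (1 - alpha) * norm_est[scheduled])

print("devices with energy left:", int(np.sum(energy_budget > 0)))
```
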
Many problems in machine learning rely on multi-task learning (MTL), in which the goal is to solve multiple related machine learning tasks simultaneously. MTL is particularly relevant for privacy-sensitive applications in areas such as healthcare, finance, and IoT computing, where sensitive data from multiple, varied sources are shared for the purpose of learning. In this work, we formalize notions of task-level privacy for MTL via joint differential privacy (JDP), a relaxation of differential privacy for mechanism design and distributed optimization. We then propose an algorithm for mean-regularized MTL, an objective commonly used for applications in personalized federated learning, subject to JDP. We analyze our objective and solver, providing certifiable guarantees on both privacy and utility. Empirically, we find that our method allows for improved privacy/utility trade-offs relative to global baselines across common federated learning benchmarks.
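
Mean-regularized MTL couples the per-task models only through their average: each task $k$ minimizes its own loss plus $\frac{\lambda}{2}\|w_k - \bar{w}\|^2$. The sketch below runs this objective on synthetic least-squares tasks and adds Gaussian noise to the released mean as a stand-in for the privacy mechanism; the noise scale and its calibration to JDP are placeholders, not the paper's certified guarantees.

```python
import numpy as np

rng = np.random.default_rng(4)
num_tasks, n, d = 5, 50, 10
lam, lr, noise_std = 1.0, 0.1, 0.05   # regularization, step size, assumed noise

# Synthetic related least-squares tasks: similar but not identical optima.
w_true = rng.normal(size=d)
tasks = []
for _ in range(num_tasks):
    X = rng.normal(size=(n, d))
    tasks.append((X, X @ (w_true + 0.1 * rng.normal(size=d))))

W = np.zeros((num_tasks, d))          # one model per task
for _ in range(200):
    # Shared mean released with additive Gaussian noise (privacy stand-in).
    w_bar = W.mean(axis=0) + noise_std * rng.normal(size=d)
    for k, (X, y) in enumerate(tasks):
        grad = X.T @ (X @ W[k] - y) / n + lam * (W[k] - w_bar)
        W[k] -= lr * grad

print("avg task RMSE:",
      np.mean([np.linalg.norm(X @ W[k] - y) / np.sqrt(n)
               for k, (X, y) in enumerate(tasks)]))
```
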
Federated multi-task learning (FMTL) has emerged as a natural choice to capture the statistical diversity among the clients in federated learning. To unleash the potential of FMTL beyond statistical diversity, we formulate a new FMTL problem FedU using Laplacian regularization, which can explicitly leverage relationships among the clients for multi-task learning. We first show that FedU provides a unified framework covering a wide range of problems such as conventional federated learning, personalized federated learning, few-shot learning, and stratified model learning. We then propose algorithms including both communication-centralized and decentralized schemes to learn optimal models of FedU. Theoretically, we show that the convergence rates of both FedU algorithms achieve linear speedup for strongly convex and sublinear speedup of order $1/2$ for nonconvex objectives. While the analysis of FedU is applicable to both strongly convex and nonconvex loss functions, the conventional FMTL algorithm MOCHA, which is based on the CoCoA framework, is only applicable to the convex case. Experimentally, we verify that FedU outperforms the vanilla FedAvg, MOCHA, as well as pFedMe and Per-FedAvg in personalized federated learning.
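
FedU's Laplacian regularization penalizes disagreement between related clients: the objective has the form $\sum_k F_k(w_k) + \frac{\eta}{2}\sum_{k,\ell} a_{k\ell}\|w_k - w_\ell\|^2$. Below is a hedged sketch of centralized gradient steps on this objective with a made-up fully connected client graph; FedU's actual communication schemes and convergence constants are in the paper, not here.

```python
import numpy as np

rng = np.random.default_rng(5)
num_clients, n, d = 4, 40, 8
eta, lr = 0.5, 0.05                  # Laplacian weight, step size

# Assumed client relationship graph: fully connected with unit weights.
A = np.ones((num_clients, num_clients)) - np.eye(num_clients)

# Synthetic per-client least-squares problems with related optima.
w_true = rng.normal(size=d)
data = []
for _ in range(num_clients):
    X = rng.normal(size=(n, d))
    data.append((X, X @ (w_true + 0.2 * rng.normal(size=d))))

W = np.zeros((num_clients, d))       # one model per client
for _ in range(300):
    for k, (X, y) in enumerate(data):
        local_grad = X.T @ (X @ W[k] - y) / n
        # Gradient of the Laplacian penalty pulls each client toward its
        # (weighted) neighbors.
        lap_grad = eta * (A[k].sum() * W[k] - A[k] @ W)
        W[k] -= lr * (local_grad + lap_grad)

print("mean pairwise disagreement:",
      np.mean([np.linalg.norm(W[i] - W[j])
               for i in range(num_clients) for j in range(i + 1, num_clients)]))
```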
