By exploiting the computing power and local data of distributed clients, federated learning (FL) offers attractive properties such as reduced communication overhead and preserved data privacy. In each communication round of FL, the clients update their local models based on their own data and upload the local updates via wireless channels. However, the latency caused by hundreds to thousands of communication rounds remains a bottleneck in FL. To minimize the training latency, this work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowledge of wireless channel state information or the statistical characteristics of clients. First, we propose a CS algorithm based on the upper confidence bound policy (CS-UCB) for ideal scenarios where the local datasets of clients are independent and identically distributed (i.i.d.) and balanced. An upper bound on the expected performance regret of the proposed CS-UCB algorithm is provided, which indicates that the regret grows logarithmically with the number of communication rounds. Then, to address non-ideal scenarios with non-i.i.d. and unbalanced local datasets and varying client availability, we further propose a CS algorithm based on the UCB policy and the virtual queue technique (CS-UCB-Q). An upper bound is also derived, which shows that the expected performance regret of the proposed CS-UCB-Q algorithm grows sub-linearly with the number of communication rounds under certain conditions. In addition, the convergence of FL training is analyzed. Finally, simulation results validate the efficiency of the proposed algorithms.
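As a rough illustration of the CS-UCB idea in this abstract, the sketch below schedules, in each round, the m clients with the highest upper-confidence-bound indices. The reward model (a noisy, normalized measure of how quickly a client returns its local update) and all constants are hypothetical placeholders, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, num_rounds, m = 10, 2000, 3        # schedule m clients per round
true_rate = rng.uniform(0.2, 1.0, num_clients)  # unknown per-client means

def reward(i):
    # Hypothetical noisy reward in [0, 1]; higher = faster local update.
    return float(np.clip(true_rate[i] + rng.normal(0, 0.1), 0.0, 1.0))

counts = np.zeros(num_clients)  # times each client has been scheduled
means = np.zeros(num_clients)   # empirical mean reward per client

for t in range(1, num_rounds + 1):
    # UCB index: empirical mean plus exploration bonus; unscheduled
    # clients get an infinite index so each is tried at least once.
    bonus = np.sqrt(2.0 * np.log(t + 1) / np.maximum(counts, 1))
    ucb = np.where(counts == 0, np.inf, means + bonus)
    for i in np.argsort(ucb)[-m:]:              # top-m UCB clients
        r = reward(i)
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]  # incremental mean update
```

The CS-UCB-Q variant would, roughly speaking, additionally maintain per-client virtual queues and fold a fairness- and availability-aware queue term into the scheduling index.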
This paper studies fast downlink beamforming algorithms using deep learning in multiuser multiple-input-single-output systems where each transmit antenna at the base station has its own power constraint. We focus on the signal-to-interference-plus-noise ratio (SINR) balancing problem, which is quasi-convex but for which no efficient solution is available. We first design a fast subgradient algorithm that can achieve a near-optimal solution with reduced complexity. We then propose a deep neural network structure to learn the optimal beamforming, based on convolutional networks and exploitation of the duality of the original problem. Two strategies for learning the dual variables, with different accuracies, are investigated, and the corresponding recovery of the original solution is facilitated by the subgradient algorithm. We also develop a generalization method so that the proposed algorithms can adapt to varying numbers of users and antennas without re-training. We carry out extensive numerical simulations and testbed experiments to evaluate the performance of the proposed algorithms. Results show that the proposed algorithms achieve solutions close to optimal in simulations with perfect channel information and outperform the nominally optimal solution in experiments, illustrating a better performance-complexity tradeoff than existing schemes.
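A loose sketch of the "learn the dual variables" strategy described in this abstract: a small convolutional network maps a real/imaginary-stacked channel matrix to non-negative per-antenna dual variables, from which a subgradient-style step would then recover the beamformers (omitted here). The architecture, layer sizes, and names below are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class DualNet(nn.Module):
    def __init__(self, num_antennas, num_users):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 8, kernel_size=3, padding=1),  # 2 = Re/Im parts
            nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(8 * num_users * num_antennas, num_antennas),
            nn.Softplus(),  # per-antenna dual variables must be non-negative
        )

    def forward(self, h):
        # h: (batch, 2, num_users, num_antennas) real/imag channel tensor
        return self.head(self.features(h))

net = DualNet(num_antennas=8, num_users=4)
h = torch.randn(16, 2, 4, 8)   # toy batch of channel realizations
duals = net(h)                 # (16, 8) non-negative dual variables
```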
Beamforming is evidently a core technology in recent generations of mobile communication networks. Nevertheless, an iterative process is typically required to optimize the parameters, making it ill-placed for real-time implementation due to high comp lexity and computational delay. Heuristic solutions such as zero-forcing (ZF) are simpler but at the expense of performance loss. Alternatively, deep learning (DL) is well understood to be a generalizing technique that can deliver promising results for a wide range of applications at much lower complexity if it is sufficiently trained. As a consequence, DL may present itself as an attractive solution to beamforming. To exploit DL, this article introduces general data- and model-driven beamforming neural networks (BNNs), presents various possible learning strategies, and also discusses complexity reduction for the DL-based BNNs. We also offer enhancement methods such as training-set augmentation and transfer learning in order to improve the generality of BNNs, accompanied by computer simulation results and testbed results showing the performance of such BNN solutions.
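The transfer-learning enhancement mentioned in this abstract might look roughly like the following: freeze the feature layers of a BNN trained in one scenario and fine-tune only its output head on a small dataset from a new scenario. The model split, shapes, and training data are hypothetical.

```python
import torch
import torch.nn as nn

pretrained_bnn = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # feature layers (to be frozen)
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 32),              # output head (to be fine-tuned)
)

for p in pretrained_bnn[:4].parameters():
    p.requires_grad = False          # freeze the feature layers

opt = torch.optim.Adam(pretrained_bnn[4].parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Small, hypothetical dataset from the new scenario.
new_x, new_y = torch.randn(256, 64), torch.randn(256, 32)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(pretrained_bnn(new_x), new_y)
    loss.backward()
    opt.step()
```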
Beamforming is an effective means to improve the quality of the received signals in multiuser multiple-input-single-output (MISO) systems. Traditionally, finding the optimal beamforming solution relies on iterative algorithms, which introduce high computational delay and are thus not suitable for real-time implementation. In this paper, we propose a deep learning framework for the optimization of downlink beamforming. In particular, the solution is obtained based on convolutional neural networks and the exploitation of expert knowledge, such as the uplink-downlink duality and the known structure of optimal solutions. Using this framework, we construct three beamforming neural networks (BNNs) for three typical optimization problems: the signal-to-interference-plus-noise ratio (SINR) balancing problem, the power minimization problem, and the sum-rate maximization problem. For the first two problems the BNNs adopt a supervised learning approach, while for the sum-rate maximization problem a hybrid method of supervised and unsupervised learning is employed. Simulation results show that the BNNs achieve near-optimal solutions to the SINR balancing and power minimization problems, and a performance close to that of the weighted minimum mean squared error (WMMSE) algorithm for the sum-rate maximization problem, while in all cases enjoying significantly reduced computational complexity. In summary, this work paves the way for fast realization of optimal beamforming in multiuser MISO systems.
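A rough sketch of the hybrid supervised/unsupervised objective for the sum-rate maximization BNN described above: a weighted combination of a regression loss against precomputed target beamformers (e.g., from WMMSE) and an unsupervised negative sum-rate term. The weighting, noise power, and tensor layout are illustrative assumptions, not the paper's exact loss.

```python
import torch

def sum_rate(w, h, noise_power=1.0):
    # w, h: (batch, users, antennas) complex beamformers and channels.
    # g[b, u, v] = |h_u^H w_v|^2: power received at user u from stream v.
    g = torch.abs(torch.einsum("bua,bva->buv", h.conj(), w)) ** 2
    signal = torch.diagonal(g, dim1=1, dim2=2)
    interference = g.sum(dim=2) - signal
    sinr = signal / (interference + noise_power)
    return torch.log2(1.0 + sinr).sum(dim=1).mean()

def hybrid_loss(w_pred, w_target, h, alpha=0.5):
    supervised = torch.mean(torch.abs(w_pred - w_target) ** 2)
    return alpha * supervised - (1.0 - alpha) * sum_rate(w_pred, h)

# Toy usage with random complex channels and placeholder targets.
h = torch.randn(16, 4, 8, dtype=torch.cfloat)
w = torch.randn(16, 4, 8, dtype=torch.cfloat, requires_grad=True)
loss = hybrid_loss(w, w.detach(), h)
loss.backward()
```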
In this paper, we consider the network power minimization problem in a downlink cloud radio access network (C-RAN), taking into account the power consumed at the baseband unit (BBU) for computation and the power consumed at the remote radio heads and fronthaul links for transmission. The power minimization problem for transmission is a fast time-scale issue, whereas the power minimization problem for computation is a slow time-scale issue; the joint network power minimization problem is therefore a mixed time-scale problem. To tackle the time-scale challenge, we introduce large system analysis to turn the original fast time-scale problem into a slow time-scale one that depends only on the statistical channel information. In addition, we propose a bound-improving branch-and-bound algorithm and a combinatorial algorithm to find the optimal and suboptimal solutions, respectively, to the power minimization problem for computation, and propose an iterative coordinate descent algorithm to solve the power minimization problem for transmission. Finally, a distributed algorithm based on hierarchical decomposition is proposed to solve the joint network power minimization problem. In summary, this work provides a framework to investigate how the execution efficiency and computing capability at the BBU, as well as the delay constraints of tasks, affect the network power minimization problem in C-RANs.
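To illustrate the coordinate-descent step used for the transmission-power subproblem, the toy sketch below minimizes a non-negatively constrained quadratic by exact per-coordinate updates (Gauss-Seidel sweeps). The quadratic objective is a stand-in for the actual network power objective; all values are hypothetical.

```python
import numpy as np

def coordinate_descent(A, b, iters=100):
    # Minimize 0.5 * x^T A x - b^T x over x >= 0 by cycling through the
    # coordinates, solving each one-dimensional problem exactly, and
    # projecting onto the non-negativity (power) constraint.
    x = np.zeros(len(b))
    for _ in range(iters):
        for j in range(len(x)):
            r = b[j] - A[j] @ x + A[j, j] * x[j]  # residual without coord j
            x[j] = max(r / A[j, j], 0.0)
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # positive definite -> convergence
b = np.array([1.0, 1.0])
x_star = coordinate_descent(A, b)
```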
In this paper, we consider an uplink heterogeneous cloud radio access network (H-CRAN), where a macro base station (BS) coexists with many remote radio heads (RRHs). For cost savings, only the BS is connected to the baseband unit (BBU) pool via fiber links; the RRHs are associated with the BBU pool through wireless fronthaul links, which share the spectrum resource with the radio access networks. Due to the limited fronthaul capacity, a compress-and-forward scheme is employed, such as point-to-point compression or Wyner-Ziv coding, and different decoding strategies are also considered. This work aims to maximize the uplink ergodic sum-rate (SR) by jointly optimizing the quantization noise matrix and the bandwidth allocation between the radio access networks and the fronthaul links, which is a mixed time-scale issue. To reduce computational complexity and communication overhead, we introduce an approximation of the joint optimization problem based on large-dimensional random matrix theory; the approximation is a slow time-scale problem because it depends only on statistical channel information. Finally, an algorithm based on Dinkelbach's algorithm is proposed to find the optimal solution to the approximate problem. In summary, this work provides an economical solution to the challenge of constrained fronthaul capacity, and a framework with reduced computational complexity for studying how bandwidth allocation and fronthaul compression affect the SR maximization problem.
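Dinkelbach's algorithm, referenced in this abstract, turns a ratio maximization max f(x)/g(x) into a sequence of parametric problems max f(x) - lam * g(x). Below is a minimal sketch with toy stand-ins for the rate and resource terms; in the actual problem, the inner maximization would be the quantization/bandwidth optimization.

```python
import numpy as np

f = lambda x: np.log2(1.0 + x)   # toy stand-in for the sum-rate term
g = lambda x: 1.0 + 0.5 * x      # toy stand-in for the resource cost

xs = np.linspace(0.0, 10.0, 10001)  # discretized feasible set
lam = 0.0
for _ in range(50):
    vals = f(xs) - lam * g(xs)      # parametric inner problem
    k = np.argmax(vals)
    if vals[k] < 1e-9:              # Dinkelbach stopping criterion
        break
    lam = f(xs[k]) / g(xs[k])       # update the ratio parameter

print(f"optimal ratio ~= {lam:.6f} at x ~= {xs[k]:.4f}")
```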