
A Deep Learning Framework for Optimization of MISO Downlink Beamforming

Published by: Wenchao Xia
Publication date: 2019
Research field: Information engineering
Paper language: English





Beamforming is an effective means to improve the quality of the received signals in multiuser multiple-input-single-output (MISO) systems. Traditionally, finding the optimal beamforming solution relies on iterative algorithms, which introduce high computational delay and are thus not suitable for real-time implementation. In this paper, we propose a deep learning framework for the optimization of downlink beamforming. In particular, the solution is obtained using convolutional neural networks and the exploitation of expert knowledge, such as the uplink-downlink duality and the known structure of optimal solutions. Using this framework, we construct three beamforming neural networks (BNNs) for three typical optimization problems: the signal-to-interference-plus-noise ratio (SINR) balancing problem, the power minimization problem, and the sum rate maximization problem. For the first two problems the BNNs adopt a supervised learning approach, while for the sum rate maximization problem a hybrid of supervised and unsupervised learning is employed. Simulation results show that the BNNs achieve near-optimal solutions to the SINR balancing and power minimization problems, and performance close to that of the weighted minimum mean squared error algorithm for the sum rate maximization problem, while in all cases enjoying significantly reduced computational complexity. In summary, this work paves the way for fast realization of optimal beamforming in multiuser MISO systems.
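To make the BNN idea concrete, the following is a minimal sketch (not the authors' implementation) of a convolutional network that maps the complex downlink channel matrix, stacked as real and imaginary planes, to a per-user power allocation; the layer sizes, the softmax power head, and all names are illustrative assumptions. The full beamformer would then be recovered from the known optimal structure, as the abstract describes.

import torch
import torch.nn as nn

class BeamformingNN(nn.Module):
    """Illustrative CNN: channel matrix (K users x N antennas) -> per-user powers."""
    def __init__(self, num_users, num_antennas):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * num_users * num_antennas, 64), nn.ReLU(),
            nn.Linear(64, num_users),
        )

    def forward(self, h_real_imag, total_power=1.0):
        # h_real_imag: (batch, 2, K, N) real/imaginary parts of the channel matrix
        logits = self.head(self.features(h_real_imag))
        # softmax keeps the predicted powers non-negative and summing to the budget
        return total_power * torch.softmax(logits, dim=-1)

# Supervised training would regress these outputs onto key parameters (e.g. powers)
# extracted from beamformers computed offline by an iterative algorithm.
model = BeamformingNN(num_users=4, num_antennas=8)
powers = model(torch.randn(32, 2, 4, 8))        # (32, 4) power allocation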




Read also

This paper investigates the optimal transmit beamforming design of simultaneous wireless information and power transfer (SWIPT) in the multiuser multiple-input-single-output (MISO) downlink with specific absorption rate (SAR) constraints. We consider the power splitting technique for SWIPT, where each receiver divides the received signal into two parts: one for information decoding and the other for energy harvesting with a practical non-linear rectification model. The problem of interest is to maximize, as far as possible, the received signal-to-interference-plus-noise ratio (SINR) and the harvested energy across all receivers, while satisfying the transmit power and SAR constraints, by optimizing the transmit beamforming at the transmitter and the power splitting ratios at the receivers. The optimal beamforming and power splitting solutions are obtained with the aid of semidefinite programming and bisection search. Low-complexity fixed beamforming and hybrid beamforming techniques are also studied. Furthermore, we study the effect of imperfect channel information and radiation matrices, and design robust beamforming to guarantee the worst-case performance. Simulation results demonstrate that our proposed algorithms can effectively deal with the radio exposure constraints and significantly outperform the conventional transmission scheme with power backoff.
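As an illustration of the bisection layer mentioned above, the sketch below bisects a common target level t and accepts or rejects each candidate with a feasibility oracle; in the paper that oracle is a semidefinite program obtained by relaxation, which is replaced here by a hypothetical stub.

def bisect_max_target(feasible, t_lo=0.0, t_hi=100.0, tol=1e-4):
    """Largest t in [t_lo, t_hi] with feasible(t) True, assuming monotone feasibility."""
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if feasible(t_mid):   # in the paper: an SDP checking whether beamformers and
            t_lo = t_mid      # power-splitting ratios exist that meet target t_mid
        else:
            t_hi = t_mid
    return t_lo

# Toy usage with a stand-in oracle; a real oracle would solve the relaxed SDP.
best_t = bisect_max_target(lambda t: t <= 7.3)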
We consider the problem of quantifying the Pareto optimal boundary in the achievable rate region over multiple-input single-output (MISO) interference channels, where the problem boils down to solving a sequence of convex feasibility problems after certain transformations. The feasibility problem is solved by two new distributed optimal beamforming algorithms, where the first one is to parallelize the computation based on the method of alternating projections, and the second one is to localize the computation based on the method of cyclic projections. Convergence proofs are established for both algorithms.
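The alternating-projections idea behind the first distributed algorithm can be illustrated with the small numpy example below, which projects back and forth between two convex sets (a halfspace and a norm ball standing in for the per-link feasibility sets) until a point in their intersection is reached.

import numpy as np

def proj_halfspace(x, a, b):
    # Projection onto {x : a^T x <= b}
    gap = a @ x - b
    return x if gap <= 0 else x - (gap / (a @ a)) * a

def proj_ball(x, radius):
    # Projection onto {x : ||x||_2 <= radius}
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

x = np.array([5.0, 5.0])
a, b, radius = np.array([1.0, 1.0]), 4.0, 3.0
for _ in range(100):                       # alternate the two projections
    x = proj_ball(proj_halfspace(x, a, b), radius)
# x now lies (approximately) in the intersection of the two sets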
This paper studies fast adaptive beamforming optimization for the signal-to-interference-plus-noise ratio balancing problem in a multiuser multiple-input single-output downlink system. Existing deep learning based approaches to predict beamforming rely on the assumption that the training and testing channels follow the same distribution, which may not hold in practice. As a result, a trained model may suffer performance deterioration when the testing network environment changes. To deal with this task mismatch issue, we propose two offline adaptive algorithms based on deep transfer learning and meta-learning, which achieve fast adaptation with limited new labelled data when the testing wireless environment changes. Furthermore, we propose an online algorithm to enhance the adaptation capability of the offline meta algorithm in realistic non-stationary environments. Simulation results demonstrate that the proposed adaptive algorithms achieve much better performance than the direct deep learning algorithm without adaptation in new environments. The meta-learning algorithm outperforms the deep transfer learning algorithm and achieves near-optimal performance. In addition, compared to the offline meta-learning algorithm, the proposed online meta-learning algorithm shows superior adaptation performance in changing environments.
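The abstract does not spell out which meta-learning algorithm is used, so the sketch below shows one plausible first-order (Reptile-style) meta-update for a beamforming network: fine-tune a copy on a handful of labelled samples from the new environment, then pull the meta-parameters toward the adapted ones. All names are placeholders.

import copy
import torch

def reptile_meta_step(meta_model, task_batches, inner_lr=1e-3, inner_steps=5, meta_lr=0.1):
    # Inner loop: adapt a copy of the network to the new task/environment
    adapted = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        for x, y in task_batches:          # few labelled (channel, beamforming-label) pairs
            loss = torch.nn.functional.mse_loss(adapted(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
    # Outer (meta) update: move meta-parameters toward the task-adapted parameters
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), adapted.parameters()):
            p_meta.add_(meta_lr * (p_task - p_meta))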
Mengfan Liu, Rui Wang (2020)
Wireless communication techniques are now widely used across many fields for convenient and efficient data transmission. Unlike the commonly used assumption of a time-invariant wireless channel, we focus on the time-varying wireless downlink channel to better reflect practical conditions. Our objective is to maximize the sum rate over the time-varying channel subject to constraints on the cut-off signal-to-interference-plus-noise ratio (SINR), transmit power, and beamforming. To adapt to the rapidly changing channel, we abandon the commonly used convex optimization approach and instead use deep reinforcement learning. From the perspective of measures such as power control, interference coordination, and beamforming, the control actions change continuously, while the sparse reward problem caused by aborted episodes is an important bottleneck that should not be ignored. We therefore propose two algorithms: the Deep Deterministic Policy Gradient (DDPG) algorithm and a hierarchical DDPG. To handle continuous actions, DDPG combines the actor-critic architecture with Deep Q-learning (DQN), so that it can output continuous actions without sacrificing the advantages of DQN and also improves performance. To address the sparse reward challenge, hierarchical DDPG borrows the idea of a meta policy from hierarchical reinforcement learning, splitting the single DDPG agent into a meta-controller and a controller. Our simulation results demonstrate that the proposed DDPG and hierarchical DDPG perform well in terms of coverage, convergence, and sum rate.
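For reference, the following is a minimal single-update-step sketch of standard DDPG (not the authors' implementation): a deterministic actor and a Q-critic with Polyak-averaged target networks. Network widths, hyperparameters, and the batch format are illustrative assumptions.

import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, act_dim), nn.Tanh())
    def forward(self, s):
        return self.net(s)                      # continuous action in [-1, 1]

class Critic(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def ddpg_update(actor, critic, actor_t, critic_t, opt_a, opt_c, batch,
                gamma=0.99, tau=0.005):
    s, a, r, s2, done = batch                   # tensors; r and done shaped (B, 1)
    with torch.no_grad():                       # bootstrapped target from target networks
        q_target = r + gamma * (1.0 - done) * critic_t(s2, actor_t(s2))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    actor_loss = -critic(s, actor(s)).mean()    # deterministic policy gradient
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
    # Polyak-average the target networks toward the online networks
    for tgt, src in zip(list(actor_t.parameters()) + list(critic_t.parameters()),
                        list(actor.parameters()) + list(critic.parameters())):
        tgt.data.mul_(1.0 - tau).add_(tau * src.data)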
This paper studies fast downlink beamforming algorithms using deep learning in multiuser multiple-input-single-output systems where each transmit antenna at the base station has its own power constraint. We focus on the signal-to-interference-plus-noise ratio (SINR) balancing problem, which is quasi-convex but for which no efficient solution is available. We first design a fast subgradient algorithm that can achieve a near-optimal solution with reduced complexity. We then propose a deep neural network structure to learn the optimal beamforming based on convolutional networks and exploitation of the duality of the original problem. Two strategies of learning various dual variables are investigated with different accuracies, and the corresponding recovery of the original solution is facilitated by the subgradient algorithm. We also develop a generalization method of the proposed algorithms so that they can adapt to a varying number of users and antennas without re-training. We carry out intensive numerical simulations and testbed experiments to evaluate the performance of the proposed algorithms. Results show that the proposed algorithms achieve a close-to-optimal solution in simulations with perfect channel information and outperform the alleged theoretically optimal solution in experiments, illustrating a better performance-complexity tradeoff than existing schemes.
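To illustrate the duality-based structure exploited above, the numpy sketch below recovers unit-norm beamforming directions from a set of (possibly learned) non-negative dual variables using the well-known form w_k proportional to (sigma^2 I + sum_j lambda_j h_j h_j^H)^(-1) h_k; under per-antenna power constraints the identity matrix is replaced by a diagonal matrix of antenna-level duals, and the user powers would then be recovered, e.g. via the subgradient step described in the abstract. All function and variable names are illustrative.

import numpy as np

def beam_directions_from_duals(H, lam, sigma2=1.0):
    """H: (K, N) with row k = h_k^H; lam: (K,) duals -> unit-norm directions (K, N)."""
    K, N = H.shape
    # sigma2*I + sum_j lam_j h_j h_j^H
    A = sigma2 * np.eye(N, dtype=complex) + (H.conj().T * lam) @ H
    W = np.linalg.solve(A, H.conj().T).T        # column k of the solve is A^{-1} h_k
    return W / np.linalg.norm(W, axis=1, keepdims=True)

H = (np.random.randn(4, 8) + 1j * np.random.randn(4, 8)) / np.sqrt(2)
W = beam_directions_from_duals(H, lam=np.ones(4))   # rows are the beam directions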