
Machine Learning-Enabled Joint Antenna Selection and Precoding Design: From Offline Complexity to Online Performance

Published by: Thang X. Vu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We investigate the performance of multi-user multiple-antenna downlink systems in which a base station (BS) serves multiple users via a shared wireless medium. To fully exploit the spatial diversity while minimizing the passive energy consumed by radio frequency (RF) components, the BS is equipped with M RF chains and N antennas, where M < N. Upon receiving pilot sequences and obtaining the channel state information, the BS determines the best subset of M antennas for serving the users. We propose a joint antenna selection and precoding design (JASPD) algorithm that maximizes the system sum rate subject to a transmit power constraint and quality-of-service (QoS) requirements. The JASPD overcomes the non-convexity of the formulated problem via a doubly iterative algorithm, in which an inner loop successively optimizes the precoding vectors and an outer loop tries all valid antenna subsets. Although it approaches (near-)global optimality, the JASPD suffers from combinatorial complexity, which may limit its application in real-time network operation. To overcome this limitation, we propose a learning-based antenna selection and precoding design algorithm (L-ASPA), which employs a deep neural network (DNN) to learn the underlying relation between the key system parameters and the selected antennas. The proposed L-ASPA is robust against changes in the number of users and their locations, the BS transmit power, and the small-scale channel fading. With a well-trained learning model, the L-ASPA significantly outperforms baseline schemes based on block diagonalization and a learning-assisted solution for broadcasting systems, and achieves a higher effective sum rate than the JASPD under limited processing time. In addition, we observe that the proposed L-ASPA reduces the computational complexity by 95% while retaining more than 95% of the optimal performance.
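
As a rough illustration of the combinatorial outer loop described above, the following Python sketch enumerates every M-antenna subset and scores it with a simple zero-forcing (ZF) precoder under equal power allocation. The ZF inner step, the toy dimensions, and the function names are assumptions made for brevity; the paper's inner loop instead optimizes the precoders iteratively under power and QoS constraints.

import itertools
import numpy as np

def zf_sum_rate(H, total_power, noise_power=1.0):
    # Illustrative inner step: zero-forcing precoding with equal per-user power
    # (the paper instead optimizes the precoders iteratively under QoS constraints).
    W = np.linalg.pinv(H)                                # M x K precoder
    W = W / np.linalg.norm(W, axis=0, keepdims=True)     # unit-norm columns
    p = total_power / H.shape[0]                         # equal power split over K users
    G = np.abs(H @ W) ** 2                               # K x K effective channel gains
    signal = p * np.diag(G)
    interference = p * (G.sum(axis=1) - np.diag(G))
    return float(np.sum(np.log2(1.0 + signal / (interference + noise_power))))

def exhaustive_antenna_selection(H_full, M, total_power):
    # Outer loop of the JASPD idea: score every subset of M out of N antennas.
    best_rate, best_subset = -np.inf, None
    for subset in itertools.combinations(range(H_full.shape[1]), M):
        rate = zf_sum_rate(H_full[:, list(subset)], total_power)
        if rate > best_rate:
            best_rate, best_subset = rate, subset
    return best_rate, best_subset

# Toy example: K = 3 users, N = 8 antennas, M = 4 RF chains.
rng = np.random.default_rng(0)
H_full = (rng.standard_normal((3, 8)) + 1j * rng.standard_normal((3, 8))) / np.sqrt(2)
rate, subset = exhaustive_antenna_selection(H_full, M=4, total_power=10.0)
print(f"best subset {subset}, sum rate {rate:.2f} bit/s/Hz")

Replacing this exhaustive N-choose-M search with a DNN that predicts promising antenna subsets from the key system parameters is, in essence, how the L-ASPA trades offline training effort for online speed.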




Read also

Large-scale antenna (LSA) systems have gained a lot of attention recently since they can significantly improve the performance of wireless systems. Similar to multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM), or MIMO-OFDM, LSA can also be combined with OFDM to deal with frequency selectivity in wireless channels. However, such a combination suffers from substantially increased complexity proportional to the number of antennas in LSA systems. In the conventional implementation of LSA-OFDM, the number of inverse fast Fourier transforms (IFFTs) increases with the antenna number, since each antenna requires an IFFT for OFDM modulation. Furthermore, zero-forcing (ZF) precoding is required in LSA systems to support more users, and the required matrix inversion leads to a huge computational burden. In this paper, we propose a low-complexity recursive convolutional precoding to address the issues above. The traditional ZF precoding can be implemented through the recursive convolutional precoding in the time domain, so that only one IFFT is required for each user and the matrix inversion can also be avoided. Simulation results show that the proposed approach achieves the same performance as ZF but with much lower complexity.
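
For context, the sketch below shows the conventional frequency-domain ZF precoding baseline whose cost the recursive convolutional precoding is designed to avoid: one matrix pseudo-inverse per subcarrier and one IFFT per transmit antenna. The dimensions, modulation, and normalization are illustrative assumptions, not the paper's configuration.

import numpy as np

# Conventional per-subcarrier ZF precoding for LSA-OFDM (the baseline):
# one pseudo-inverse per subcarrier, one IFFT per transmit antenna.
K, N_tx, N_sc = 4, 64, 128                       # users, antennas, subcarriers
rng = np.random.default_rng(1)
H = (rng.standard_normal((N_sc, K, N_tx)) +
     1j * rng.standard_normal((N_sc, K, N_tx))) / np.sqrt(2)
data = (rng.integers(0, 2, (N_sc, K)) * 2 - 1).astype(complex)   # BPSK symbols

precoded = np.empty((N_sc, N_tx), dtype=complex)
for sc in range(N_sc):
    W = np.linalg.pinv(H[sc])                    # matrix inversion on every subcarrier
    W /= np.linalg.norm(W)                       # crude power normalization
    precoded[sc] = W @ data[sc]

# One OFDM modulator (IFFT) per transmit antenna: N_tx IFFTs in total.
time_domain = np.fft.ifft(precoded, axis=0)
print(time_domain.shape)
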
The large number of antennas and radio frequency (RF) chains at the base stations (BSs) leads to high energy consumption in massive MIMO systems. Thus, how to improve the energy efficiency (EE) with a computationally efficient approach is a significant challenge in the design of massive MIMO systems. With this motivation, a learning-based stochastic gradient descent algorithm is proposed in this paper to obtain the optimal joint uplink and downlink EE with joint antenna selection and user scheduling in single-cell massive MIMO systems. Using Jensen's inequality and the characteristics of wireless channels, a lower bound on the system throughput is obtained. Subsequently, incorporating the power consumption model, the corresponding lower bound on the EE of the system is identified. Finally, the learning-based stochastic gradient descent method is used to solve the joint antenna selection and user scheduling problem, which is a combinatorial optimization problem. Rare-event simulation is embedded in the learning-based stochastic gradient descent method to generate samples with very small probabilities. In the analysis, both perfect and imperfect channel side information (CSI) at the BS are considered. Minimum mean-square error (MMSE) channel estimation is employed in the study of the imperfect CSI case. In addition, the effect of a constraint on the number of available RF chains in massive MIMO systems is investigated, considering both perfect and imperfect CSI at the BS.
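
The following heavily simplified sketch illustrates the general idea of learning a sampling distribution over antenna subsets with stochastic gradient updates. The energy-efficiency surrogate, the score-function gradient, and all dimensions are assumptions; the paper's throughput lower bound, user scheduling, and rare-event simulation are omitted.

import numpy as np

rng = np.random.default_rng(2)
K, N, L = 4, 16, 8                           # users, antennas, antennas to switch on
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

def toy_ee(subset, H, p_tx=1.0, p_rf=0.2):
    # Toy energy-efficiency surrogate: ZF sum rate over the total power consumed.
    Hs = H[:, subset]
    G = np.abs(Hs @ np.linalg.pinv(Hs)) ** 2
    rate = np.sum(np.log2(1.0 + p_tx * np.diag(G) / (G.sum(1) - np.diag(G) + 1.0)))
    return rate / (p_tx + p_rf * len(subset))

theta = np.zeros(N)                          # per-antenna selection scores (logits)
for _ in range(200):                         # stochastic-gradient (REINFORCE-style) updates
    probs = np.exp(theta) / np.exp(theta).sum()
    samples = [rng.choice(N, size=L, replace=False, p=probs) for _ in range(32)]
    scores = np.array([toy_ee(s, H) for s in samples])
    grad = np.zeros(N)
    for s, r in zip(samples, scores):
        g = -probs.copy()
        g[s] += 1.0                          # approximate score-function gradient
        grad += (r - scores.mean()) * g
    theta += 0.1 * grad / len(samples)

best = np.argsort(theta)[-L:]                # keep the L highest-scoring antennas
print(sorted(best.tolist()), round(toy_ee(best, H), 3))
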
To improve national security, government agencies have long been committed to enforcing powerful surveillance measures on suspicious individuals or communications. In this paper, we consider a wireless legitimate surveillance system, where a full-duplex multi-antenna legitimate monitor aims to eavesdrop on a dubious communication link between a suspicious pair via proactive jamming. Assuming that the legitimate monitor can successfully overhear the suspicious information only when its achievable data rate is no smaller than that of the suspicious receiver, the key objective is to maximize the eavesdropping non-outage probability by jointly designing the jamming power and the receive and transmit beamformers at the legitimate monitor. Depending on the number of receive/transmit antennas implemented, i.e., single-input single-output, single-input multiple-output, multiple-input single-output and multiple-input multiple-output (MIMO), four different scenarios are investigated. For each scenario, the optimal jamming power is derived in closed form and efficient algorithms are obtained for the optimal transmit/receive beamforming vectors. Moreover, low-complexity suboptimal beamforming schemes are proposed for the MIMO case. Our analytical findings demonstrate that, by exploiting multiple antennas at the legitimate monitor, the eavesdropping non-outage probability can be significantly improved compared to the single-antenna case. In addition, the proposed suboptimal transmit zero-forcing scheme yields performance similar to that of the optimal scheme.
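
To make the objective concrete, here is a minimal Monte Carlo estimate of the eavesdropping non-outage probability in the single-antenna (SISO) special case, assuming Rayleigh fading and a fixed jamming power; the closed-form optimal jamming power and the monitor's self-interference are not modeled here.

import numpy as np

# Non-outage event: the monitor's rate is at least the suspicious link's rate,
# which proactive jamming degrades. Powers and fading statistics are assumptions.
rng = np.random.default_rng(3)
trials, P_s, P_j, noise = 100_000, 1.0, 0.5, 1.0

h_se = rng.exponential(1.0, trials)     # suspicious Tx -> monitor channel gain |h|^2
h_sd = rng.exponential(1.0, trials)     # suspicious Tx -> suspicious Rx
h_jd = rng.exponential(1.0, trials)     # monitor (jammer) -> suspicious Rx

rate_monitor = np.log2(1 + P_s * h_se / noise)
rate_suspicious = np.log2(1 + P_s * h_sd / (noise + P_j * h_jd))
print("eavesdropping non-outage probability ~", np.mean(rate_monitor >= rate_suspicious))
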
Joint user selection (US) and vector precoding (joint US-VP) is proposed for the multiuser multiple-input multiple-output (MU-MIMO) downlink. The main difference between joint US-VP and conventional US is that US depends on the data symbols for joint US-VP, whereas conventional US is independent of the data symbols. The replica method is used to analyze the performance of joint US-VP in the large-system limit, where the numbers of transmit antennas, users, and selected users tend to infinity while their ratios are kept constant. The analysis under the assumptions of replica symmetry (RS) and 1-step replica symmetry breaking (1RSB) implies that optimal data-independent US provides nothing but the same performance as random US in the large-system limit, whereas data-independent US is capacity-achieving as only the number of users tends to infinity. It is shown that joint US-VP can provide a substantial reduction of the energy penalty in the large-system limit. Consequently, joint US-VP outperforms separate US-VP, which combines vector precoding (VP) with data-independent US, in terms of the achievable sum rate. In particular, data-dependent US can be applied to general modulation and implemented with a greedy algorithm.
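
As a small illustration of the vector-precoding energy penalty that data-dependent selection targets, the sketch below brute-forces the integer perturbation minimizing the transmit energy for a fixed ZF precoder and data vector; the perturbation alphabet, the scaling factor tau, and the dimensions are assumptions, and no user selection is performed.

import itertools
import numpy as np

rng = np.random.default_rng(5)
K, tau = 3, 4.0
# Square channel (often ill-conditioned) so that a perturbation can pay off.
H = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)
W = np.linalg.pinv(H)                                        # ZF precoder
d = rng.choice([-1.0, 1.0], K) + 1j * rng.choice([-1.0, 1.0], K)   # QPSK data symbols

best_p, best_energy = None, np.inf
for re_part in itertools.product([-1, 0, 1], repeat=K):
    for im_part in itertools.product([-1, 0, 1], repeat=K):
        p = np.array(re_part) + 1j * np.array(im_part)
        energy = np.linalg.norm(W @ (d + tau * p)) ** 2      # transmit energy for this perturbation
        if energy < best_energy:
            best_p, best_energy = p, energy

print("energy without perturbation:", round(float(np.linalg.norm(W @ d) ** 2), 3))
print("energy with VP perturbation:", round(float(best_energy), 3))
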
Climate models are complicated software systems that approximate atmospheric and oceanic fluid mechanics at a coarse spatial resolution. Typical climate forecasts only explicitly resolve processes larger than 100 km and approximate any process occurring below this scale (e.g. thunderstorms) using so-called parametrizations. Machine learning could improve upon the accuracy of some traditional physical parametrizations by learning from so-called global cloud-resolving models. We compare the performance of two machine learning models, random forests (RF) and neural networks (NNs), at parametrizing the aggregate effect of moist physics in a 3 km resolution global simulation with an atmospheric model. The NN outperforms the RF when evaluated offline on a testing dataset. However, when the ML models are coupled to an atmospheric model run at 200 km resolution, the NN-assisted simulation crashes within 7 days, while the RF-assisted simulations remain stable. Both runs produce more accurate weather forecasts than a baseline configuration, but globally averaged climate variables drift over longer timescales.
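
A minimal offline comparison in the spirit of this study, using synthetic data as a stand-in for coarse-grained model state and sub-grid moist-physics tendencies, might look as follows; the features, targets, and hyperparameters are assumptions, and the coupled (online) behaviour that distinguishes the two models is not captured here.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the parametrization task:
# coarse-grained state (features) -> sub-grid tendency (target).
rng = np.random.default_rng(4)
X = rng.standard_normal((5000, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X_tr, y_tr)

print("RF offline R^2:", round(r2_score(y_te, rf.predict(X_te)), 3))
print("NN offline R^2:", round(r2_score(y_te, nn.predict(X_te)), 3))
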