In cell-free massive MIMO networks, an efficient distributed detection algorithm is of significant importance. In this paper, we propose a distributed expectation propagation (EP) detector for cell-free massive MIMO. The detector is composed of two modules: a nonlinear module at the central processing unit (CPU) and a linear module at each access point (AP). The turbo principle from iterative decoding is used to compute and pass extrinsic information between the modules. An analytical framework is then provided to characterize the asymptotic performance of the proposed EP detector with a large number of antennas. Simulation results show that the proposed method outperforms existing distributed detectors in terms of bit-error rate.
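As a rough, hedged illustration of the turbo-style exchange described in this abstract, the following Python sketch runs a small EP-type detector in which a linear LMMSE module and a nonlinear symbol-wise module pass extrinsic means and precisions back and forth. The single-receiver setup, QPSK alphabet, dimensions, and numerical clamps are our own illustrative choices, not the paper's cell-free configuration.

import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 8                      # users, receive antennas (hypothetical sizes)
snr_db = 15.0
sigma2 = 10 ** (-snr_db / 10)

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x_true = qpsk[rng.integers(0, 4, K)]
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
y = H @ x_true + np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

gamma = np.ones(K)               # extrinsic precisions from the nonlinear module
r = np.zeros(K, dtype=complex)   # extrinsic means from the nonlinear module

for _ in range(10):
    # Linear module: LMMSE posterior under the Gaussian pseudo-prior CN(r, 1/gamma).
    Sigma = np.linalg.inv(H.conj().T @ H / sigma2 + np.diag(gamma))
    mu = Sigma @ (H.conj().T @ y / sigma2 + gamma * r)
    v_post = np.real(np.diag(Sigma))
    gamma_ext = np.maximum(1 / v_post - gamma, 1e-8)           # extrinsic precision
    r_ext = (mu / v_post - gamma * r) / gamma_ext              # extrinsic mean

    # Nonlinear module: symbol-wise posterior under the discrete QPSK prior.
    d2 = np.abs(r_ext[:, None] - qpsk[None, :]) ** 2
    logp = -gamma_ext[:, None] * d2
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    x_hat = p @ qpsk
    v_hat = np.maximum((p * np.abs(qpsk[None, :] - x_hat[:, None]) ** 2).sum(axis=1), 1e-8)
    gamma = np.maximum(1 / v_hat - gamma_ext, 1e-8)            # extrinsic back to the linear module
    r = (x_hat / v_hat - gamma_ext * r_ext) / gamma

print("detected:", qpsk[np.argmax(p, axis=1)])
print("true:    ", x_true)

The two maximum() clamps keep the extrinsic precisions positive, a common practical safeguard in EP-style detectors.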
Hengtao He, Rui Wang, Weijie Jin (2020)
Millimeter-wave (mmWave) communications are among the most promising technologies for future wireless networks that integrate a wide range of data-demanding applications. To compensate for the large channel attenuation in the mmWave band and avoid high hardware cost, a lens-based beamspace massive multiple-input multiple-output (MIMO) system is considered. However, the beam squint effect in wideband mmWave systems makes channel estimation very challenging, especially when the receiver is equipped with a limited number of radio-frequency (RF) chains. Furthermore, real channel data cannot be obtained before the mmWave system is deployed in a new environment, which makes it impossible to train a deep learning (DL)-based channel estimator on a real data set beforehand. To solve this problem, we propose a model-driven unsupervised learning network, named the learned denoising-based generalized expectation consistent (LDGEC) signal recovery network. By utilizing Stein's unbiased risk estimator as the loss, the LDGEC network can be trained with only the limited measurements corresponding to the pilot symbols, instead of real channel data. Although designed for unsupervised learning, the LDGEC network can also be trained in a supervised manner with real channel data in a denoiser-by-denoiser fashion. Numerical results demonstrate that the LDGEC-based channel estimator significantly outperforms state-of-the-art compressive sensing-based algorithms when the receiver is equipped with a small number of RF chains and low-resolution ADCs.
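To make the role of the Stein's unbiased risk estimator (SURE) loss concrete, the sketch below estimates a denoiser's mean-square error from noisy data alone, using a Monte Carlo probe for the divergence term. The soft-threshold denoiser and the synthetic sparse signal are placeholders standing in for the learned network and the channel; none of this is the LDGEC architecture itself.

import numpy as np

rng = np.random.default_rng(1)

def denoiser(y, lam=0.5):
    # Placeholder denoiser (soft thresholding); a trained network would go here.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure_loss(y, sigma, f, eps=1e-3):
    """Estimate E||f(y) - x||^2 / N without access to the clean signal x."""
    n = y.size
    fy = f(y)
    b = rng.standard_normal(n)                     # random probe for the divergence
    div = b @ (f(y + eps * b) - fy) / eps          # Monte Carlo divergence estimate
    return np.sum((fy - y) ** 2) / n - sigma ** 2 + 2 * sigma ** 2 * div / n

# Sparse ground truth and noisy observation (only y and sigma are used by SURE).
x = rng.standard_normal(10_000) * (rng.random(10_000) < 0.1)
sigma = 0.3
y = x + sigma * rng.standard_normal(x.size)

true_mse = np.mean((denoiser(y) - x) ** 2)
print(f"SURE estimate: {sure_loss(y, sigma, denoiser):.4f}  true MSE: {true_mse:.4f}")

Because the SURE value can be computed without the clean signal, it can serve as a training loss when only pilot measurements are available, which is the situation the abstract describes.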
Massive multiuser multiple-input multiple-output (MU-MIMO) has become a mainstream technology in fifth-generation wireless systems. To reduce the high hardware cost and power consumption of massive MU-MIMO, low-resolution digital-to-analog converters (DACs) are used for each antenna and radio-frequency (RF) chain in downlink transmission, which brings challenges for precoding design. To circumvent these obstacles, we develop a model-driven deep learning (DL) network for massive MU-MIMO with finite-alphabet precoding in this article. The architecture of the network is specially designed by unfolding an iterative algorithm. Compared with traditional state-of-the-art techniques, the proposed DL-based precoder shows significant advantages in performance, complexity, and robustness to channel estimation error under Rayleigh fading channels.
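The difficulty created by finite-alphabet outputs can be seen with a naive baseline: quantizing a zero-forcing precoder with 1-bit DACs. The sketch below compares the residual multiuser interference with ideal and 1-bit DACs; it is only a reference point that an unfolded precoder would aim to improve upon, not the proposed network, and all dimensions are hypothetical.

import numpy as np

rng = np.random.default_rng(2)
U, B = 4, 32                     # users, base-station antennas (hypothetical)
H = (rng.standard_normal((U, B)) + 1j * rng.standard_normal((U, B))) / np.sqrt(2)
s = (rng.choice([-1, 1], U) + 1j * rng.choice([-1, 1], U)) / np.sqrt(2)   # QPSK symbols

# Infinite-resolution zero-forcing precoding.
x_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T) @ s

# 1-bit DACs: each per-antenna real and imaginary component is forced to +/-1.
x_q = (np.sign(x_zf.real) + 1j * np.sign(x_zf.imag)) / np.sqrt(2 * B)

def residual(x):
    # Residual multiuser interference after the best common scalar rescaling.
    z = H @ x
    alpha = np.vdot(z, s) / np.vdot(z, z)
    return np.linalg.norm(alpha * z - s)

print("ZF with ideal DACs:", residual(x_zf))
print("ZF with 1-bit DACs:", residual(x_q))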
In this paper, we investigate model-driven deep learning (DL) for MIMO detection. In particular, the MIMO detector is specially designed by unfolding an iterative algorithm and adding some trainable parameters. Since the number of trainable parameters is much smaller than in data-driven DL-based signal detectors, the model-driven DL-based MIMO detector can be rapidly trained with a much smaller data set. The proposed MIMO detector can be easily extended to soft-input soft-output detection. Furthermore, we investigate joint MIMO channel estimation and signal detection (JCESD), where the detector takes the channel estimation error and channel statistics into consideration, while the channel estimate is refined by the detected data and accounts for the detection error. Numerical results show that the model-driven DL-based MIMO detector significantly improves the performance of the corresponding traditional iterative detector, outperforms other DL-based MIMO detectors, and exhibits superior robustness to various mismatches.
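As one example of the soft-output extension mentioned in this abstract, the snippet below converts per-symbol Gaussian estimates (a mean and a variance per user), such as those an iterative detector produces at its final layer, into bit log-likelihood ratios for a downstream channel decoder. The QPSK Gray mapping and the example numbers are assumptions made purely for illustration.

import numpy as np

qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
# Assumed Gray mapping: bit 0 encodes the sign of the real part, bit 1 the imaginary part.
bits = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

def symbol_llrs(r, v):
    """Exact bit LLRs under a Gaussian model r_k ~ CN(x_k, v_k)."""
    metric = -np.abs(r[:, None] - qpsk[None, :]) ** 2 / v[:, None]   # (K, 4) log-likelihoods
    llrs = []
    for b in range(2):
        num = np.logaddexp.reduce(metric[:, bits[:, b] == 0], axis=1)
        den = np.logaddexp.reduce(metric[:, bits[:, b] == 1], axis=1)
        llrs.append(num - den)                                       # LLR = log P(b=0)/P(b=1)
    return np.stack(llrs, axis=1)                                    # (K, 2)

# Example: estimates for two users as they might come out of the final layer.
r = np.array([0.9 + 0.8j, -0.6 - 0.7j]) / np.sqrt(2)
v = np.array([0.1, 0.3])
print(symbol_llrs(r, v))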
In this paper, we propose a model-driven deep learning network for multiple-input multiple-output (MIMO) detection. The structure of the network is specially designed by unfolding an iterative algorithm, and some trainable parameters are optimized through deep learning techniques to improve the detection performance. Since the number of trainable variables of the network equals the number of layers, the network can be easily trained within a very short time. Furthermore, the network can handle time-varying channels with only a single training phase. Numerical results show that the proposed approach significantly improves the performance of the iterative algorithm under Rayleigh and correlated MIMO channels.
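A minimal sketch of what "unfolding an iterative algorithm with one trainable parameter per layer" can look like is given below: a projected-gradient detector for BPSK whose per-layer step sizes are its only learnable variables. The crude finite-difference training loop only keeps the example self-contained in NumPy; the network, alphabet, and training procedure in the paper may differ.

import numpy as np

rng = np.random.default_rng(3)
K, N, L = 4, 8, 6                # users, antennas, unfolded layers (illustrative)
sigma = 0.1

def sample_batch(batch):
    H = rng.standard_normal((batch, N, K)) / np.sqrt(N)
    x = rng.choice([-1.0, 1.0], (batch, K))                 # BPSK for simplicity
    y = np.einsum("bnk,bk->bn", H, x) + sigma * rng.standard_normal((batch, N))
    return H, x, y

def detector(theta, H, y):
    # Unfolded iteration: x <- tanh(x + theta_l * H^T (y - H x)).
    x = np.zeros((H.shape[0], K))
    for t in theta:
        grad = np.einsum("bnk,bn->bk", H, y - np.einsum("bnk,bk->bn", H, x))
        x = np.tanh(x + t * grad)     # soft projection onto the BPSK alphabet
    return x

def loss(theta, H, x, y):
    return np.mean((detector(theta, H, y) - x) ** 2)

theta = np.full(L, 0.5)              # L trainable parameters, one per layer
for step in range(300):              # finite-difference "training" loop
    H, x, y = sample_batch(64)
    base = loss(theta, H, x, y)
    g = np.array([(loss(theta + 1e-3 * np.eye(L)[i], H, x, y) - base) / 1e-3
                  for i in range(L)])
    theta -= 0.2 * g

H, x, y = sample_batch(1000)
ber = np.mean(np.sign(detector(theta, H, y)) != x)
print("learned step sizes:", np.round(theta, 3), " BER:", ber)

With only L scalars to learn, a few hundred small batches suffice here, which reflects the abstract's point about very short training times.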
Intelligent communication is increasingly regarded as the mainstream direction for future wireless communications. As a major branch of machine learning, deep learning (DL) has been applied in physical layer communications and has demonstrated impressive performance improvements in recent years. However, most existing works related to DL focus on data-driven approaches, which treat the communication system as a black box and train it using a huge volume of data. Training such a network requires sufficient computing resources and extensive time, both of which are rarely available in communication devices. By contrast, model-driven DL approaches combine communication domain knowledge with DL to reduce the demand for computing resources and training time. This article reviews recent advances in the application of model-driven DL approaches in physical layer communications, including transmission schemes, receiver design, and channel information recovery. Several open issues for further research are also highlighted after the survey.
Hengtao He, Chao-Kai Wen, et al. (2018)
Hybrid analog-digital precoding architectures and low-resolution analog-to-digital converter (ADC) receivers are two solutions to reduce hardware cost and power consumption in millimeter-wave (mmWave) multiple-input multiple-output (MIMO) communication systems with large antenna arrays. In this study, we consider a mmWave MIMO-OFDM receiver with a generalized hybrid architecture in which a small number of radio-frequency (RF) chains and low-resolution ADCs are employed simultaneously. Owing to the strong nonlinearity introduced by low-resolution ADCs, data detection is challenging, particularly if a Bayesian-optimal data detector is desired. This study aims to fill this gap. Using the generalized expectation consistent signal recovery technique, we propose a computationally efficient data detection algorithm that provides a minimum mean-square error estimate of the data symbols and can be extended to a mixed-ADC architecture. Exploiting the particular structure of the MIMO-OFDM channel matrix, we provide a low-complexity realization that requires only FFT operations and matrix-vector multiplications. Furthermore, we present an analytical framework to study the theoretical performance of the detector in the large-system limit, which precisely evaluates performance measures such as the mean-square error and symbol error rate. Based on this optimal detector, the potential of adding a few low-resolution RF chains and high-resolution ADCs to a mixed-ADC architecture is investigated. Simulation results confirm the accuracy of our theoretical analysis, which can therefore be used for rapid system design. The results reveal that adding a few low-resolution RF chains to the original unquantized system yields significant gains.
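The nonlinearity introduced by low-resolution ADCs, which a Bayesian detector must account for, can be isolated in a scalar example: given a sample's quantizer bin and a Gaussian prior on the unquantized value, the MMSE estimate is the mean of a truncated normal. The uniform quantizer and the parameters below are illustrative and not the paper's front end.

import numpy as np
from scipy.stats import norm

def quantize(z, bits, step):
    # Mid-rise uniform quantizer; returns the lower/upper edge of each sample's bin.
    levels = 2 ** bits
    idx = np.clip(np.floor(z / step) + levels // 2, 0, levels - 1)
    lo = (idx - levels // 2) * step
    hi = lo + step
    lo[idx == 0] = -np.inf            # outermost bins are unbounded
    hi[idx == levels - 1] = np.inf
    return lo, hi

def mmse_dequantize(lo, hi, m, v):
    # Posterior mean of z ~ N(m, v) conditioned on lo < z <= hi (truncated normal).
    s = np.sqrt(v)
    a, b = (lo - m) / s, (hi - m) / s
    Z = np.maximum(norm.cdf(b) - norm.cdf(a), 1e-12)
    return m + s * (norm.pdf(a) - norm.pdf(b)) / Z

rng = np.random.default_rng(4)
z = rng.standard_normal(100_000)
lo, hi = quantize(z, bits=2, step=0.5)
z_hat = mmse_dequantize(lo, hi, m=0.0, v=1.0)
mid = np.clip((lo + hi) / 2, -1.0, 1.0)        # naive mid-point reconstruction, clamped tails
print("MSE mid-point:", np.mean((mid - z) ** 2), " MSE posterior mean:", np.mean((z_hat - z) ** 2))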
Channel estimation is very challenging when the receiver is equipped with a limited number of radio-frequency (RF) chains in beamspace millimeter-wave (mmWave) massive multiple-input multiple-output systems. To solve this problem, we exploit a learned denoising-based approximate message passing (LDAMP) network. This neural network can learn the channel structure and estimate the channel from a large amount of training data. Furthermore, we provide an analytical framework for the asymptotic performance of the channel estimator. Based on our analysis and simulation results, the LDAMP neural network significantly outperforms state-of-the-art compressed sensing-based algorithms even when the receiver is equipped with a small number of RF chains. Therefore, deep learning is a powerful tool for channel estimation in mmWave communications.
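The recursion underneath denoising-based AMP, which LDAMP unfolds with a learned denoiser, can be sketched as follows; a soft-threshold denoiser and a synthetic sparse signal stand in here for the trained CNN and the beamspace channel, so the numbers are illustrative only.

import numpy as np

rng = np.random.default_rng(5)
n, m = 1024, 256
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = rng.standard_normal(n) * (rng.random(n) < 0.05)          # sparse "channel"
y = A @ x + 0.01 * rng.standard_normal(m)

def denoise(r, sigma):
    # Stand-in denoiser (soft threshold); LDAMP plugs a learned CNN in here.
    return np.sign(r) * np.maximum(np.abs(r) - 1.5 * sigma, 0.0)

x_hat, z = np.zeros(n), y.copy()
for _ in range(15):
    sigma = np.linalg.norm(z) / np.sqrt(m)                   # effective noise level
    r = x_hat + A.T @ z                                       # pseudo-data
    x_new = denoise(r, sigma)
    # Onsager correction via a Monte Carlo estimate of the denoiser divergence.
    b = rng.standard_normal(n)
    div = b @ (denoise(r + 1e-3 * b, sigma) - x_new) / 1e-3
    z = y - A @ x_new + (div / m) * z
    x_hat = x_new

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

The Monte Carlo divergence term implements the Onsager correction that keeps the effective noise approximately Gaussian across iterations, which is what makes this family of estimators amenable to asymptotic analysis.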
In this paper, we propose a generalized expectation consistent signal recovery algorithm to estimate the signal $\mathbf{x}$ from the nonlinear measurements of a linear transform output $\mathbf{z}=\mathbf{A}\mathbf{x}$. This estimation problem has been encountered in many applications, such as communications with front-end impairments, compressed sensing, and phase retrieval. The proposed algorithm extends the prior art called generalized turbo signal recovery from a partial discrete Fourier transform matrix $\mathbf{A}$ to a class of general matrices. Numerical results show the excellent agreement of the proposed algorithm with the theoretical Bayesian-optimal estimator derived using the replica method.
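For context, the moment-matching exchange that expectation-consistent recovery algorithms of this kind alternate between their two modules can be written compactly (our notation, not necessarily the paper's): if a module computes a posterior mean $\mu_{\mathrm{post}}$ and variance $v_{\mathrm{post}}$ under an incoming Gaussian message with mean $r_{\mathrm{pri}}$ and precision $\gamma_{\mathrm{pri}}$, the extrinsic message it returns has precision $\gamma_{\mathrm{ext}} = 1/v_{\mathrm{post}} - \gamma_{\mathrm{pri}}$ and mean $r_{\mathrm{ext}} = (\mu_{\mathrm{post}}/v_{\mathrm{post}} - \gamma_{\mathrm{pri}} r_{\mathrm{pri}})/\gamma_{\mathrm{ext}}$.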