
LEMO: Learn to Equalize for MIMO-OFDM Systems with Low-Resolution ADCs

 Added by Lei Chu
 Publication date 2019
Research language: English





This paper develops a new deep neural network based equalization framework for massive multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems that employ low-resolution analog-to-digital converters (ADCs) at the base station (BS). The use of low-resolution ADCs can greatly reduce hardware complexity and circuit power consumption; however, it leaves the BS almost blind to the channel state information, making the equalization problem difficult. In this paper, we consider a supervised learning architecture, where the goal is to learn a representative function that can predict the targets (constellation points) from the inputs (outputs of the low-resolution ADCs) based on the labeled training data (pilot signals). Our main contributions are two-fold: 1) we design a new activation function, whose outputs are close to the constellation points when the parameters are finally optimized, which lets us fully exploit stochastic gradient descent for this discrete optimization problem; 2) we design an unsupervised loss and add it to the optimization objective, aiming to enhance the representation ability (i.e., generalization). Lastly, various experimental results confirm the superiority of the proposed equalizer over several existing ones, particularly when the statistics of the channel state information are unknown.
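The abstract does not spell out the activation function or the unsupervised loss; the sketch below only illustrates the two ideas under assumed choices: a smooth tanh-based activation whose outputs concentrate near (BPSK-per-dimension) constellation points {-1, +1}, and a hypothetical penalty on the distance to the nearest constellation point added to the supervised objective.

```python
import numpy as np

def soft_constellation_activation(x, alpha=4.0):
    """Differentiable activation that pushes outputs toward the
    constellation points {-1, +1}. As alpha grows, tanh(alpha * x)
    approaches sign(x), so outputs concentrate near valid points
    while the function stays smooth enough for stochastic gradient
    descent. (Illustrative only; not the paper's exact design.)"""
    return np.tanh(alpha * x)

def equalizer_loss(pred, targets, lam=0.1):
    """Supervised MSE to pilot labels plus a hypothetical
    unsupervised term penalizing distance from the nearest
    constellation point, in the spirit of the generalization
    objective described above."""
    supervised = np.mean((pred - targets) ** 2)
    nearest = np.sign(pred)              # nearest point in {-1, +1}
    unsupervised = np.mean((pred - nearest) ** 2)
    return supervised + lam * unsupervised
```

With `alpha` large, gradient descent can move freely in the continuous parameter space while the outputs still land near discrete constellation points.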

Related research

Hengtao He, Chao-Kai Wen, 2018
Hybrid analog-digital precoding architectures and low-resolution analog-to-digital converter (ADC) receivers are two solutions for reducing hardware cost and power consumption in millimeter wave (mmWave) multiple-input multiple-output (MIMO) communication systems with large antenna arrays. In this study, we consider a mmWave MIMO-OFDM receiver with a generalized hybrid architecture in which a small number of radio-frequency (RF) chains and low-resolution ADCs are employed simultaneously. Owing to the strong nonlinearity introduced by low-resolution ADCs, the task of data detection is challenging, particularly achieving a Bayesian optimal data detector. This study aims to fill this gap. Using the generalized expectation consistent signal recovery technique, we propose a computationally efficient data detection algorithm that provides a minimum mean-square error estimate of the data symbols and is extended to a mixed-ADC architecture. Exploiting the particular structure of the MIMO-OFDM channel matrix, we provide a low-complexity realization in which only FFT operations and matrix-vector multiplications are required. Furthermore, we present an analytical framework to study the theoretical performance of the detector in the large-system limit, which can precisely evaluate performance measures such as mean-square error and symbol error rate. Based on this optimal detector, the potential of adding a few low-resolution RF chains and high-resolution ADCs to the mixed-ADC architecture is investigated. Simulation results confirm the accuracy of our theoretical analysis, which can be used for rapid system design. The results reveal that adding a few low-resolution RF chains to the original unquantized systems can yield significant gains.
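The low-complexity realization mentioned above rests on a standard fact: a circulant matrix-vector product diagonalizes under the DFT, so it costs O(N log N) via FFTs instead of O(N^2). A minimal sketch of that identity for a plain circulant matrix (the OFDM channel matrix in the paper is block-circulant, which generalizes this per block):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply a circulant matrix (first column c) by x via FFTs.
    C @ x equals the circular convolution of c with x, whose DFT is
    the elementwise product of the two DFTs."""
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))
```

This is the reason only FFT operations and matrix-vector multiplications are needed in the detector's inner loop.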
This paper considers uplink massive multiple-input multiple-output (MIMO) systems with low-resolution analog-to-digital converters (ADCs) over Rician fading channels. Maximum-ratio combining (MRC) and zero-forcing (ZF) receivers are considered under the assumption of perfect and imperfect channel state information (CSI). Low-resolution ADCs are considered for both data detection and channel estimation, and the resulting performance is analyzed. Asymptotic approximations of the spectral efficiency (SE) for large systems are derived based on random matrix theory. With these results, we provide insights into the trade-off between the SE and the ADC resolution and study the influence of the Rician K-factor on the performance. It is shown that a large K-factor may lead to better performance and alleviate the influence of quantization noise on channel estimation. Moreover, we investigate the power scaling laws for both receivers under imperfect CSI and show that, when the number of base station (BS) antennas is very large, the transmit power can be scaled down by the number of BS antennas for both receivers without loss of SE, while the overall performance remains limited by the resolution of the ADCs. The asymptotic analysis is validated by numerical results. It is also shown that the SE gap between the two receivers narrows as the K-factor increases. We further show that ADCs with moderate resolution lead to better energy efficiency (EE) than high-resolution or extremely low-resolution ADCs, and that ZF receivers achieve higher EE than MRC receivers.
This paper considers a multipair amplify-and-forward massive MIMO relaying system with low-resolution ADCs at both the relay and the destinations. The channel state information (CSI) at the relay is obtained via pilot training and is then used to perform simple maximum-ratio combining/maximum-ratio transmission processing at the relay. It is also assumed that the destinations use statistical CSI to decode the transmitted signals. Exact and approximate closed-form expressions for the achievable sum rate are presented, which enable efficient evaluation of the impact of key system parameters on performance. In addition, an optimal relay power allocation scheme is studied, and the power scaling law is characterized. It is found that, with only low-resolution ADCs at the relay, increasing the number of relay antennas is an effective method to compensate for the rate loss caused by coarse quantization. However, it is ineffective against the detrimental effect of low-resolution ADCs at the destinations. Moreover, it is shown that deploying massive relay antenna arrays can still bring significant power savings, i.e., the transmit power of each source can be cut down proportional to $1/M$ to maintain a constant rate, where $M$ is the number of relay antennas.
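The $1/M$ power scaling law can be illustrated with a toy SNR model (hypothetical; not the paper's exact rate expression): coherent combining over $M$ antennas contributes an array gain of roughly $M$, so cutting each source's transmit power to $E/M$ keeps the effective SNR, and hence the rate, roughly constant, while the fixed loss from coarse quantization cannot be removed by scaling $M$.

```python
def effective_snr(M, p_per_source, quant_gain=0.8):
    """Toy power-scaling illustration: array gain ~M multiplies the
    per-source power; quant_gain (< 1) stands in for the fixed
    quantization loss that larger M does not remove.
    (Hypothetical model for illustration only.)"""
    return quant_gain * M * p_per_source

E = 10.0  # total power budget per source
snrs = [effective_snr(M, E / M) for M in (64, 128, 256)]
```

All three effective SNRs come out equal, reflecting a constant rate despite the $1/M$ power reduction.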
In order to reduce hardware complexity and power consumption, massive multiple-input multiple-output (MIMO) systems employ low-resolution analog-to-digital converters (ADCs) to acquire quantized measurements $\boldsymbol{y}$. This poses new challenges to the channel estimation problem, and the sparse prior on the channel coefficient vector $\boldsymbol{x}$ in the angle domain is often used to compensate for the information lost during quantization. By interpreting the sparse prior from a probabilistic perspective, we can assume $\boldsymbol{x}$ follows a certain sparse prior distribution and recover it using approximate message passing (AMP). However, the distribution parameters are unknown in practice and need to be estimated. Due to the increased computational complexity in the quantization noise model, previous works either use an approximated noise model or manually tune the noise distribution parameters. In this paper, we treat both signals and parameters as random variables and recover them jointly within the AMP framework. The proposed approach leads to a much simpler parameter estimation method, allowing us to work with the quantization noise model directly. Experimental results show that the proposed approach achieves state-of-the-art performance under various noise levels and does not require parameter tuning, making it a practical and maintenance-free approach for channel estimation.
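As a minimal model of the measurement process above, a $b$-bit uniform mid-rise quantizer can stand in for the low-resolution ADC producing the quantized measurements (one assumed quantizer design; the paper may model the ADC differently):

```python
import numpy as np

def uniform_adc(z, bits=2, vmax=1.0):
    """b-bit uniform mid-rise quantizer modelling a low-resolution
    ADC: inputs are clipped to [-vmax, vmax] and mapped to one of
    2**bits reconstruction levels. (A minimal illustrative model.)"""
    levels = 2 ** bits
    delta = 2 * vmax / levels                # quantization step
    z = np.clip(z, -vmax, vmax - 1e-12)      # keep top value in last bin
    idx = np.floor((z + vmax) / delta)       # bin index 0..levels-1
    return -vmax + (idx + 0.5) * delta       # mid-rise output level
```

With `bits=1` this reduces to a (scaled) sign detector, the extreme case where the information loss that the sparse prior must compensate for is largest.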
Orthogonal Time Frequency Space (OTFS) is a novel framework that processes modulation symbols via a time-independent channel characterized in the delay-Doppler domain. The conventional waveform, orthogonal frequency division multiplexing (OFDM), requires tracking frequency-selective fading channels over time, whereas OTFS benefits from full time-frequency diversity by leveraging appropriate equalization techniques. In this paper, we consider a neural network-based supervised learning framework for OTFS equalization. The introduced neural network is trained within each OTFS frame, following an online learning setting: the training and testing datasets are within the same OTFS frame over the air. Utilizing reservoir computing, a special recurrent neural network, the resulting one-shot online learning is sufficiently flexible to cope with channel variations among different OTFS frames (e.g., due to link/rank adaptation and user scheduling in cellular networks). The proposed method does not require explicit channel state information (CSI), and simulation results demonstrate a lower bit error rate (BER) than conventional equalization methods in the low signal-to-noise ratio (SNR) regime under large Doppler spreads. Compared with its neural network-based counterparts for OFDM, the introduced approach for OTFS leads to a better tradeoff between processing complexity and equalization performance.
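A minimal echo state network of the kind the abstract describes: a fixed random recurrent reservoir plus a linear readout fitted one-shot by ridge regression on the pilot (training) portion of a frame. Reservoir size, spectral radius, and regularization below are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class ReservoirEqualizer:
    """Sketch of a reservoir-computing equalizer: only the linear
    readout W_out is trained, which is what makes one-shot online
    learning within a single frame cheap."""

    def __init__(self, n_in=1, n_res=50, rho=0.8, ridge=1e-4):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.standard_normal((n_res, n_res))
        # scale recurrent weights to spectral radius rho < 1 (echo state property)
        W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
        self.W = W
        self.ridge = ridge
        self.W_out = None

    def _states(self, inputs):
        x = np.zeros(self.W.shape[0])
        states = []
        for u in inputs:
            x = np.tanh(self.W @ x + self.W_in @ np.atleast_1d(u))
            states.append(x)
        return np.array(states)

    def fit(self, pilots_in, pilots_out):
        # one-shot ridge-regression readout on the pilot symbols
        S = self._states(pilots_in)
        A = S.T @ S + self.ridge * np.eye(S.shape[1])
        self.W_out = np.linalg.solve(A, S.T @ pilots_out)

    def predict(self, inputs):
        return self._states(inputs) @ self.W_out
```

Because only the readout is refitted, the equalizer can be retrained from scratch in every frame, matching the online-learning setting described above.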
