Lattice reduction (LR) is a preprocessing technique for multiple-input multiple-output (MIMO) symbol detection that achieves better bit-error-rate (BER) performance. In this paper, we propose a customized homogeneous multiprocessor for LR. The processor cores are based on the transport triggered architecture (TTA). We propose modifications to the popular LR algorithm, Lenstra-Lenstra-Lovász (LLL), for high throughput. The TTA cores are programmed in a high-level language, and each core contains several special function units that accelerate the program code. The multiprocessor takes 187 cycles to reduce a single matrix. The architecture is synthesized on a 90 nm technology, occupying 405 kgates at 210 MHz.
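For reference, a minimal textbook LLL sketch in Python/NumPy is given below; it assumes a real-valued basis with vectors as matrix columns and does not reflect the hardware-oriented modifications or the TTA mapping described above.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Textbook LLL reduction of the columns of basis matrix B.
    Recomputing the Gram-Schmidt data each step is inefficient but
    keeps the sketch short and correct."""
    B = B.astype(float).copy()
    n = B.shape[1]

    def gso(B):
        # Gram-Schmidt orthogonalization: B[:, i] = Q[:, i] + sum_{j<i} mu[i, j] Q[:, j]
        Q = np.zeros_like(B)
        mu = np.eye(n)
        for i in range(n):
            Q[:, i] = B[:, i]
            for j in range(i):
                mu[i, j] = B[:, i] @ Q[:, j] / (Q[:, j] @ Q[:, j])
                Q[:, i] -= mu[i, j] * Q[:, j]
        return Q, mu

    k = 1
    while k < n:
        Q, mu = gso(B)
        # Size-reduce column k against the previous columns.
        for j in range(k - 1, -1, -1):
            if abs(mu[k, j]) > 0.5:
                B[:, k] -= round(mu[k, j]) * B[:, j]
                Q, mu = gso(B)
        # Lovász condition: advance if satisfied, otherwise swap and back up.
        if Q[:, k] @ Q[:, k] >= (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1]):
            k += 1
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B
```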
A new architecture called the integer-forcing (IF) linear receiver has recently been proposed for multiple-input multiple-output (MIMO) fading channels, wherein an appropriate integer linear combination of the received symbols has to be computed as a part of the decoding process. In this paper, we propose a method based on the Hermite-Korkine-Zolotareff (HKZ) and Minkowski lattice basis reduction algorithms to obtain the integer coefficients for the IF receiver. We show that the proposed method provides a lower bound on the ergodic rate and achieves full receive diversity. The suitability of the complex Lenstra-Lenstra-Lovász (CLLL) lattice reduction algorithm for this problem is also investigated. Furthermore, we establish the connection between the proposed IF linear receivers and lattice reduction-aided MIMO detectors (of equivalent complexity), and point out the advantages of the former class of receivers over the latter. For $2 \times 2$ and $4 \times 4$ MIMO channels, we compare the coded-block error rate and bit error rate of the proposed approach with those of other linear receivers. Simulation results show that the proposed approach outperforms the zero-forcing (ZF) receiver, the minimum mean square error (MMSE) receiver, and the lattice reduction-aided MIMO detectors.
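As an illustration of how reduction yields the IF integer coefficients, here is a hedged NumPy sketch for a real-valued channel: the effective-noise metric for a coefficient vector $a$ is $a^T (I + \mathrm{snr}\, H^T H)^{-1} a$, so short vectors of the lattice generated by the Cholesky factor of that matrix give good coefficients. The paper uses HKZ/Minkowski reduction; for brevity the sketch reuses the `lll_reduce` function above.

```python
import numpy as np

def if_integer_coeffs(H, snr):
    """Sketch: integer combination vectors for an IF linear receiver.
    Real-valued channel assumed; lll_reduce is the sketch given earlier
    (the paper itself uses HKZ / Minkowski reduction instead)."""
    n = H.shape[1]
    G = np.linalg.inv(np.eye(n) + snr * H.T @ H)   # Gram matrix of the noise metric
    Lfac = np.linalg.cholesky(G)                   # G = Lfac @ Lfac.T
    B = Lfac.T                                     # ||B @ a|| is the effective noise for coeffs a
    B_red = lll_reduce(B)                          # reduced basis B_red = B @ U
    U = np.rint(np.linalg.solve(B, B_red)).astype(int)  # unimodular transform
    return U.T                                     # rows = integer coefficient vectors
```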
Lattice reduction is a popular preprocessing strategy in multiple-input multiple-output (MIMO) detection. In a quest to develop a low-complexity reduction algorithm for large-scale problems, this paper investigates a new framework called sequential reduction (SR), which aims to reduce the lengths of all basis vectors. Performance upper bounds for the strongest reduction in SR are given for lattice dimensions no larger than 4. The proposed framework enables the implementation of a hash-based low-complexity lattice reduction algorithm, which becomes especially attractive when applied to large-scale MIMO detection. Simulation results show that, compared to other reduction algorithms, the hash-based SR algorithm exhibits the lowest complexity while maintaining comparable error performance.
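The paper's hash-based algorithm is not reproduced here, but the basic SR idea of shortening every basis vector admits a simple illustration: sweep over the vectors and replace each with its residual after Babai rounding against the span of the others. A sketch under that interpretation (names and the sweep schedule are our own):

```python
import numpy as np

def sequential_reduce(B, max_sweeps=10):
    """Naive sequential-reduction sweep: shorten each basis vector by
    subtracting the rounded integer combination of the remaining ones.
    Illustrative only; the paper's hash-based SR avoids this cost."""
    B = B.astype(float).copy()
    n = B.shape[1]
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            rest = np.delete(B, i, axis=1)
            # Babai rounding: nearest integer combination of the other vectors.
            c = np.rint(np.linalg.lstsq(rest, B[:, i], rcond=None)[0])
            cand = B[:, i] - rest @ c
            if cand @ cand < B[:, i] @ B[:, i] - 1e-12:
                B[:, i] = cand
                changed = True
        if not changed:   # stop once no vector can be shortened
            break
    return B
```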
In this paper, we propose a model-driven deep learning network for multiple-input multiple-output (MIMO) detection. The structure of the network is specially designed by unfolding an iterative detection algorithm, and some trainable parameters are optimized through deep learning techniques to improve the detection performance. Since the number of trainable variables equals the number of layers, the network can be trained within a very short time. Furthermore, the network can handle time-varying channels with only a single training. Numerical results show that the proposed approach significantly improves the performance of the underlying iterative algorithm under both Rayleigh and correlated MIMO channels.
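To make the unfolding concrete, below is a hedged PyTorch sketch of such a network: each layer is one step of a simple gradient-based detector on $\|y - Hx\|^2$ with a single trainable step size per layer, matching the one-parameter-per-layer property. The actual iterative algorithm unfolded in the paper may differ, and the tanh soft projection assumes BPSK-like symbols.

```python
import torch
import torch.nn as nn

class UnfoldedDetector(nn.Module):
    """Model-driven detector sketch: num_layers unfolded gradient steps,
    each with its own trainable step size (one scalar per layer)."""
    def __init__(self, num_layers=10):
        super().__init__()
        self.step = nn.Parameter(torch.full((num_layers,), 0.1))

    def forward(self, y, H):
        x = torch.zeros(H.shape[-1], dtype=y.dtype)
        for gamma in self.step:
            x = x - gamma * (H.T @ (H @ x - y))  # gradient step on ||y - Hx||^2
            x = torch.tanh(x)                    # soft projection; assumes +/-1 symbols
        return x
```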
We propose a deep-learning approach to the joint MIMO detection and channel decoding problem. Conventional MIMO receivers adopt a model-based approach, performing MIMO detection and channel decoding in a linear or iterative manner. However, due to the complex MIMO signal model, the optimal solution to the joint problem (i.e., maximum likelihood decoding of the transmitted codewords from the received MIMO signals) is computationally infeasible. As a practical measure, current model-based MIMO receivers all use suboptimal decoding methods with affordable computational complexity. This work applies the latest advances in deep learning to the design of MIMO receivers. In particular, we leverage deep neural networks (DNN) with supervised training to solve the joint MIMO detection and channel decoding problem. We show that a DNN can be trained to deliver much better decoding performance than conventional MIMO receivers. Our simulations show that a DNN implementation consisting of seven hidden layers can outperform conventional model-based linear or iterative receivers. This performance improvement points to a new direction for future MIMO receiver design.
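A minimal sketch of the kind of network the abstract describes, assuming a fully connected architecture with seven hidden layers that maps received samples to per-bit soft outputs (width, activation, and output head are our assumptions; only the depth is specified above):

```python
import torch.nn as nn

def build_receiver_dnn(in_dim, num_bits, hidden=512):
    """Seven-hidden-layer fully connected receiver: received MIMO samples
    in, per-bit probabilities out (train with BCE against the true bits)."""
    layers, dim = [], in_dim
    for _ in range(7):
        layers += [nn.Linear(dim, hidden), nn.ReLU()]
        dim = hidden
    layers += [nn.Linear(dim, num_bits), nn.Sigmoid()]  # bit probabilities
    return nn.Sequential(*layers)
```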
We analyze the performance of multiple-input multiple-output (MIMO) communication systems employing spatial multiplexing and zero-forcing (ZF) detection. The distribution of the ZF signal-to-noise ratio (SNR) is characterized when either the intended stream or the interfering streams experience Rician fading, and when the fading may be correlated on the transmit side. Previously, exact ZF analysis based on a well-known SNR expression has been hindered by the noncentrality of the Wishart distribution involved, and approximation with a central Wishart distribution has not proved consistently accurate. In contrast, the following exact ZF study proceeds from a lesser-known SNR expression that separates the intended and interfering channel-gain vectors. By first conditioning on, and then averaging over, the interference, the ZF SNR distribution for Rician-Rayleigh fading is shown to be an infinite linear combination of gamma distributions; for Rayleigh-Rician fading, the ZF SNR is shown to be gamma-distributed. Based on the SNR distribution, we derive new series expressions for the ZF average error probability, outage probability, and ergodic capacity. Numerical results confirm the accuracy of our new expressions and reveal the effects of interference and channel statistics on performance.
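The well-known expression referenced above is the standard per-stream ZF post-processing SNR, $\gamma_k = 1/(\sigma^2 [(H^H H)^{-1}]_{kk})$. A short NumPy sketch computes it and sanity-checks the classical Rayleigh-fading result that each stream's SNR is gamma-distributed with shape $N_R - N_T + 1$ (the Monte Carlo setup is our own illustration):

```python
import numpy as np

def zf_stream_snrs(H, noise_var):
    """Per-stream ZF SNR: gamma_k = 1 / (noise_var * [(H^H H)^{-1}]_{kk}),
    assuming unit per-stream transmit power."""
    G = np.linalg.inv(H.conj().T @ H)
    return 1.0 / (noise_var * np.real(np.diag(G)))

# Monte Carlo check under i.i.d. Rayleigh fading (CN(0,1) entries):
rng = np.random.default_rng(0)
NR, NT, trials = 4, 2, 2000
snrs = [zf_stream_snrs((rng.standard_normal((NR, NT))
                        + 1j * rng.standard_normal((NR, NT))) / np.sqrt(2), 1.0)[0]
        for _ in range(trials)]
print(np.mean(snrs))  # ~ NR - NT + 1 = 3, the mean of Gamma(NR - NT + 1, 1)
```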