
Low-complexity Voronoi shaping for the Gaussian channel

Added by Shen Li
Publication date: 2021
Language: English





Voronoi constellations (VCs) are finite sets of vectors of a coding lattice enclosed by the translated Voronoi region of a shaping lattice, which is a sublattice of the coding lattice. In conventional VCs, the shaping lattice is a scaled-up version of the coding lattice. In this paper, we design low-complexity VCs with a cubic coding lattice of up to 32 dimensions, in which pseudo-Gray labeling is applied to minimize the bit error rate. The designed VCs achieve considerable shaping gains of up to 1.03 dB and offer a finer granularity of spectral efficiencies in practice. A mutual information estimation method and a log-likelihood approximation method based on importance sampling are proposed for very large constellations and applied to the designed VCs. With error-control coding, the proposed VCs attain higher achievable information rates than conventional scaled VCs thanks to their inherently good pseudo-Gray labeling, while requiring lower decoding complexity.
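The modulo-shaping operation behind VCs can be made concrete with a short sketch. The snippet below is not taken from the paper: it maps a point of a cubic coding lattice Z^n into the Voronoi constellation obtained with the conventional scaled shaping lattice M·Z^n, for which nearest-point quantization reduces to componentwise rounding; the values n = 4 and M = 8 are arbitrary examples.

```python
import numpy as np

def voronoi_encode_cubic(c, M):
    """Map a coding-lattice point c in Z^n into the Voronoi constellation
    defined by the scaled shaping lattice M*Z^n: x = c - Q_{M*Z^n}(c),
    where Q is nearest-point quantization (componentwise rounding here)."""
    c = np.asarray(c, dtype=float)
    shaping_point = M * np.round(c / M)   # nearest point of M*Z^n to c
    return c - shaping_point              # lies in the Voronoi region of M*Z^n

# toy usage: cubic coding lattice Z^4 with shaping lattice 8*Z^4
c = np.array([13, -5, 2, 7])
print(voronoi_encode_cubic(c, 8))         # each component falls near [-4, 4)
```

For a non-cubic shaping lattice, the rounding step is replaced by the nearest-point quantizer of that lattice, which is where most of the complexity of Voronoi shaping resides.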



Related research


End-to-end learning of communication systems with neural networks and particularly autoencoders is an emerging research direction which gained popularity in the last year. In this approach, neural networks learn to simultaneously optimize encoding and decoding functions to establish reliable message transmission. In this paper, this line of thinking is extended to communication scenarios in which an eavesdropper must further be kept ignorant about the communication. The secrecy of the transmission is achieved by utilizing a modified secure loss function based on cross-entropy which can be implemented with state-of-the-art machine-learning libraries. This secure loss function approach is applied in a Gaussian wiretap channel setup, for which it is shown that the neural network learns a trade-off between reliable communication and information secrecy by clustering learned constellations. As a result, an eavesdropper with higher noise cannot distinguish between the symbols anymore.
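One way to implement such a cross-entropy-based secrecy trade-off is sketched below; this is an illustration under assumed names (bob_logits, eve_logits, alpha), not the paper's exact loss. It combines the legitimate receiver's cross-entropy with a term that pulls the eavesdropper's posterior towards the uniform distribution.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def secure_loss(bob_logits, eve_logits, labels, alpha=1.0):
    """Illustrative secrecy-aware loss: reliability term for Bob plus a term
    that is minimized when Eve's posterior over messages is uniform."""
    n, k = bob_logits.shape
    p_bob = softmax(bob_logits)
    p_eve = softmax(eve_logits)
    ce_bob = -np.mean(np.log(p_bob[np.arange(n), labels] + 1e-12))
    # cross-entropy between the uniform distribution and Eve's posterior;
    # over the probability simplex it is minimized exactly at the uniform posterior
    ce_eve_uniform = -np.mean(np.sum((1.0 / k) * np.log(p_eve + 1e-12), axis=-1))
    return ce_bob + alpha * ce_eve_uniform

# toy usage with random logits for a batch of 4 messages out of 16
rng = np.random.default_rng(0)
labels = rng.integers(0, 16, size=4)
print(secure_loss(rng.normal(size=(4, 16)), rng.normal(size=(4, 16)), labels, alpha=0.5))
```

The weight alpha sets the trade-off described in the abstract: larger values favour clustering of the learned constellation (secrecy) over Bob's reliability.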
We investigate the special case of the diamond relay channel comprising a Gaussian channel with identical frequency response between the user and the relays, and fronthaul links of limited rate from the relays to the destination. We use oblivious compress-and-forward (CF) with distributed compression and decode-and-forward (DF), where each relay decodes the whole message and sends half of its bits to the destination. We derive an achievable rate by time-sharing between DF and CF. The optimal time-sharing proportion between DF and CF and the power and rate allocations differ at each frequency and are fully determined.
We propose a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in our security proof is the flatness factor, which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. We not only introduce the notion of secrecy-good lattices, but also propose the flatness factor as a design criterion for such lattices. Both the modulo-lattice Gaussian channel and the genuine Gaussian channel are considered. In the latter case, we propose a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in our proposed schemes.
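The flatness factor is easy to evaluate numerically in low dimension, which helps build intuition for the criterion. The sketch below is only an illustration for the one-dimensional lattice cZ (not the paper's construction): it folds a Gaussian of standard deviation sigma onto one Voronoi cell and reports the maximal deviation from a flat density.

```python
import numpy as np

def folded_gaussian_density(x, c, sigma, terms=50):
    """f_{sigma, cZ}(x): Gaussian density of std sigma summed over the lattice cZ."""
    pts = c * np.arange(-terms, terms + 1)
    return np.sum(np.exp(-(x - pts) ** 2 / (2 * sigma ** 2))) / (np.sqrt(2 * np.pi) * sigma)

def flatness_factor(c, sigma, grid=1000):
    """epsilon_{cZ}(sigma) = max_x |vol * f_{sigma,cZ}(x) - 1| with vol = c;
    small values mean the folded Gaussian is nearly uniform over the cell."""
    xs = np.linspace(0.0, c, grid)
    vals = np.array([c * folded_gaussian_density(x, c, sigma) for x in xs])
    return np.max(np.abs(vals - 1.0))

print(flatness_factor(1.0, 0.2))   # ~1.0: sigma small relative to the cell, far from flat
print(flatness_factor(1.0, 0.5))   # ~0.014: almost flat, so the leakage bound is small
```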
Integer-forcing (IF) precoding, also known as downlink IF, is a promising new approach for communication over multiple-input multiple-output (MIMO) broadcast channels. Inspired by the integer-forcing linear receiver for multiple-access channels, it generalizes linear precoding by inducing an effective channel matrix that is approximately integer, rather than approximately identity. Combined with lattice encoding and a pre-inversion of the channel matrix at the transmitter, the scheme has the potential to outperform any linear precoding scheme, despite enjoying similar low complexity. In this paper, a specific IF precoding scheme, called diagonally-scaled exact IF (DIF), is proposed and shown to achieve maximum spatial multiplexing gain. For the special case of two receivers, in the high SNR regime, an optimal choice of parameters is derived analytically, leading to an almost closed-form expression for the achievable sum rate. In particular, it is shown that the gap to the sum capacity is upper bounded by 0.27 bits for any channel realization. For general SNR, a regularized version of DIF (RDIF) is proposed. Numerical results for two receivers under Rayleigh fading show that RDIF can achieve performance superior to optimal linear precoding and very close to the sum capacity.
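A toy numerical sketch of the pre-inversion idea is given below; it is not the DIF scheme itself (the diagonal scaling and the rule for selecting the integer matrix are the paper's contribution), and the channel matrix is an arbitrary ill-conditioned example. It only shows why targeting an integer effective channel other than the identity can cost less transmit power than plain channel inversion.

```python
import numpy as np

H = np.array([[1.0, 0.9],
              [0.9, 1.0]])                   # ill-conditioned 2x2 broadcast channel

def power_penalty(A):
    """Precoder P = H^{-1} A makes the effective channel H P exactly the
    integer matrix A; its Frobenius norm reflects the transmit-power cost."""
    P = np.linalg.solve(H, A)                # solves H P = A
    assert np.allclose(H @ P, A)             # effective channel is exactly integer
    return np.linalg.norm(P)

A_identity = np.eye(2)                       # conventional channel inversion
A_integer = np.array([[1.0, 1.0],
                      [0.0, 1.0]])           # an alternative full-rank integer target
print(power_penalty(A_identity))             # ~10.0
print(power_penalty(A_integer))              # ~7.1: cheaper for the same receiver noise
```

With lattice encoding, each receiver can remove the remaining integer mixing by modulo-lattice decoding, which is what makes a non-identity integer target admissible.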
Tobias Koch (2014)
This paper studies the capacity of the peak-and-average-power-limited Gaussian channel when its output is quantized using a dithered, infinite-level, uniform quantizer of step size $\Delta$. It is shown that the capacity of this channel tends to that of the unquantized Gaussian channel when $\Delta$ tends to zero, and it tends to zero when $\Delta$ tends to infinity. In the low signal-to-noise ratio (SNR) regime, it is shown that, when the peak-power constraint is absent, the low-SNR asymptotic capacity is equal to that of the unquantized channel irrespective of $\Delta$. Furthermore, an expression for the low-SNR asymptotic capacity for finite peak-to-average-power ratios is given and evaluated in the low- and high-resolution limit. It is demonstrated that, in this case, the low-SNR asymptotic capacity converges to that of the unquantized channel when $\Delta$ tends to zero, and it tends to zero when $\Delta$ tends to infinity. Comparing these results with achievability results for (undithered) 1-bit quantization, it is observed that the dither reduces capacity in the low-precision limit, and it reduces the low-SNR asymptotic capacity unless the peak-to-average-power ratio is unbounded.
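The quantizer in question is straightforward to simulate. The sketch below (a generic illustration, not the paper's capacity analysis) implements a subtractive-dither, infinite-level uniform quantizer of step $\Delta$ and checks the classical property that the overall error is confined to one step and uncorrelated with a Gaussian input.

```python
import numpy as np

def dithered_quantize(x, delta, dither):
    """Infinite-level uniform quantizer of step delta with subtractive dither:
    the receiver subtracts the shared dither after quantization."""
    return delta * np.round((x + dither) / delta) - dither

rng = np.random.default_rng(0)
delta = 0.5
x = rng.normal(size=100_000)                          # Gaussian channel output samples
d = rng.uniform(-delta / 2, delta / 2, size=x.size)   # shared dither
err = dithered_quantize(x, delta, d) - x
print(err.min(), err.max())                           # magnitude stays within delta/2
print(np.corrcoef(x, err)[0, 1])                      # near 0: error decorrelated from input
```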