
GLRT-Optimal Noncoherent Lattice Decoding

 Added by Daniel Ryan
 Publication date 2007
Language: English





This paper presents new low-complexity lattice-decoding algorithms for noncoherent block detection of QAM and PAM signals over complex-valued fading channels. The algorithms are optimal in terms of the generalized likelihood ratio test (GLRT). The computational complexity is polynomial in the block length, making GLRT-optimal noncoherent detection feasible for implementation. We also provide even lower-complexity suboptimal algorithms. Simulations show that the suboptimal algorithms have performance indistinguishable from that of the optimal algorithms. Finally, we consider block-based transmission and propose noncoherent detection as an alternative to pilot-assisted transmission (PAT). The new technique is shown to outperform PAT.
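For illustration, the GLRT decision rule at the heart of these algorithms has a compact form: with an unknown complex gain $h$ in $y = hs + n$, the GLRT selects the symbol block $s$ that maximizes $|s^H y|^2 / \|s\|^2$. The Python sketch below applies this rule by brute-force enumeration of 4-PAM blocks; it is only a reference implementation of the decision rule (exponential in the block length), not the paper's polynomial-time lattice algorithm, and all parameter values are illustrative.

```python
import itertools
import numpy as np

def glrt_detect(y, constellation, block_len):
    """Brute-force GLRT noncoherent block detector (illustrative).

    Picks the symbol block s maximizing |s^H y|^2 / ||s||^2, the GLRT
    decision when the gain h in y = h*s + n is unknown. Complexity is
    exponential in block_len; the paper's lattice algorithms reach the
    same decision in polynomial time.
    """
    best_metric, best_block = -np.inf, None
    for block in itertools.product(constellation, repeat=block_len):
        s = np.array(block, dtype=complex)
        metric = np.abs(np.vdot(s, y)) ** 2 / np.vdot(s, s).real
        if metric > best_metric:
            best_metric, best_block = metric, s
    return best_block

# Example with 4-PAM over an unknown complex fading gain (values illustrative).
rng = np.random.default_rng(0)
pam4 = np.array([-3.0, -1.0, 1.0, 3.0])
s_true = rng.choice(pam4, size=4)
h = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)      # unknown channel gain
noise = 0.1 * (rng.normal(size=4) + 1j * rng.normal(size=4))
y = h * s_true + noise
s_hat = glrt_detect(y, pam4, block_len=4)
# The GLRT metric is invariant to scaling of s, so s_hat may equal -s_true;
# this inherent ambiguity is usually resolved by fixing a reference symbol.
```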



Related Research

Discrete-time Rayleigh fading single-input single-output (SISO) and multiple-input multiple-output (MIMO) channels are considered, with no channel state information at the transmitter or the receiver. The fading is assumed to be stationary and correlated in time, but independent from antenna to antenna. Peak-power and average-power constraints are imposed on the transmit antennas. For MIMO channels, these constraints are either imposed on the sum over antennas, or on each individual antenna. For SISO channels and MIMO channels with sum power constraints, the asymptotic capacity as the peak signal-to-noise ratio tends to zero is identified; for MIMO channels with individual power constraints, this asymptotic capacity is obtained for a class of channels called transmit separable channels. The results for MIMO channels with individual power constraints are carried over to SISO channels with delay spread (i.e. frequency selective fading).
This paper investigates noncoherent detection in a two-way relay channel operated with physical-layer network coding (PNC), assuming FSK modulation and short-packet transmissions. For noncoherent detection, the detector has access to the magnitude but not the phase of the received signal. For conventional communication, in which a receiver receives the signal from a single transmitter, the phase does not affect the magnitude, so the performance of the noncoherent detector is independent of the phase. PNC, however, is a multiuser system in which a receiver receives signals from multiple transmitters simultaneously. The relative phase of the signals from different transmitters affects the received signal magnitude through constructive-destructive interference. In particular, for good performance, the noncoherent detector in PNC must take into account the influence of the relative phase on the signal magnitude. Building on this observation, this paper delves into the fundamentals of PNC noncoherent detector design. To avoid excessive overhead, we do away with preambles. We show how the relative phase can be deduced directly from the magnitudes of the received data symbols. Numerical results show that our detector performs nearly as well as a fictitious optimal detector that has perfect knowledge of the channel gains and relative phase.
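As a small illustration of the point above (not the paper's detector), the sketch below shows how the relative phase between two transmitters sending the same FSK tone moves the received magnitude between fully destructive and fully constructive superposition; the unit channel gains are an assumption made for clarity.

```python
import numpy as np

# Illustrative superposition of two unit-gain transmitters on the same FSK tone:
# the combined sample is 1 + exp(j*phi), whose magnitude swings from 0
# (destructive) to 2 (constructive) as the relative phase phi varies. This is
# the dependence a PNC noncoherent detector must account for.
phis = np.linspace(0.0, 2.0 * np.pi, 9)
magnitudes = np.abs(1.0 + np.exp(1j * phis))
for phi, mag in zip(phis, magnitudes):
    print(f"relative phase {phi:4.2f} rad -> received magnitude {mag:.2f}")
```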
This work presents a new resource allocation optimization framework for cellular networks using neighborhood-based optimization. Under this framework, resources are allocated within virtual cells encompassing several base-stations and the users within their coverage area. Incorporating the virtual-cell concept enables more sophisticated cooperative communication schemes such as coordinated multi-point decoding. We form the virtual cells using hierarchical clustering, given a particular number of such cells. Once the virtual cells are formed, we consider a cooperative decoding scheme in which the base-stations in each virtual cell jointly decode the signals that they receive. We propose an iterative solution for the resource allocation problem resulting from the cooperative decoding within each virtual cell. Numerical results for the average system sum rate of our network design under hierarchical clustering are presented. These results indicate that virtual cells with neighborhood-based optimization lead to significant gains in sum rate over optimization within each cell, yet may also incur a significant sum-rate penalty compared to fully-centralized optimization.
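A minimal sketch of the virtual-cell formation step, assuming only that base-station coordinates and the desired number of virtual cells are given; it uses standard agglomerative (Ward) clustering from SciPy and does not reproduce the paper's cooperative decoding or resource-allocation stages.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Group base-stations into a fixed number of virtual cells by hierarchical
# clustering of their positions. Coordinates and the number of cells are
# illustrative assumptions.
rng = np.random.default_rng(1)
bs_positions = rng.uniform(0.0, 10.0, size=(20, 2))   # 20 base-stations in a 10x10 area

num_virtual_cells = 4
Z = linkage(bs_positions, method="ward")               # agglomerative clustering
labels = fcluster(Z, t=num_virtual_cells, criterion="maxclust")

for cell_id in range(1, num_virtual_cells + 1):
    members = np.where(labels == cell_id)[0]
    print(f"virtual cell {cell_id}: base-stations {members.tolist()}")
```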
An uplink system with a single-antenna transmitter and a single receiver with a large number of antennas is considered. We propose an energy-detection-based single-shot noncoherent communication scheme which does not use the instantaneous channel state information (CSI), but rather only the knowledge of the channel statistics. The suggested system uses a transmitter that modulates information on the power of the symbols, and a receiver which measures only the average energy across the antennas. We propose constellation designs which are asymptotically optimal with respect to symbol error rate (SER) as the number of antennas increases, for any finite signal-to-noise ratio (SNR) at the receiver, under different assumptions on the availability of CSI statistics (exact channel fading distribution or the first few moments of the channel fading distribution). We also consider the case of imperfect knowledge of the channel statistics and describe in detail the case when there is a bounded uncertainty on the moments of the fading distribution. We present numerical results on the SER performance achieved by these designs in typical scenarios and find that they may outperform existing noncoherent constellations, e.g., conventional Amplitude Shift Keying (ASK), and pilot-based schemes, e.g., Pulse Amplitude Modulation (PAM). We also observe that optimizing a constellation for a specific channel distribution makes it very sensitive to uncertainties in the channel statistics. In particular, constellation designs based on optimistic channel conditions can lead to significant performance degradation in terms of the achieved symbol error rates.
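A minimal sketch of the energy-detection receiver described above, under assumed i.i.d. Rayleigh fading with unit average gain and an illustrative (not optimized) power constellation: the transmitter maps a symbol to a power level, and the receiver averages $|y_i|^2$ over the antennas and picks the nearest expected energy.

```python
import numpy as np

# Energy-based noncoherent detection with a large receive array (illustrative).
rng = np.random.default_rng(2)
num_antennas = 256
noise_var = 1.0
power_levels = np.array([0.0, 2.0, 6.0, 14.0])   # assumed power constellation

tx_index = 2                                      # transmitted symbol index
h = (rng.normal(size=num_antennas) + 1j * rng.normal(size=num_antennas)) / np.sqrt(2)
n = np.sqrt(noise_var / 2) * (rng.normal(size=num_antennas) + 1j * rng.normal(size=num_antennas))
y = np.sqrt(power_levels[tx_index]) * h + n

# With E|h|^2 = 1, the expected average energy for power p is p + noise_var.
avg_energy = np.mean(np.abs(y) ** 2)
expected = power_levels + noise_var
rx_index = int(np.argmin(np.abs(avg_energy - expected)))
print(f"sent symbol {tx_index}, detected symbol {rx_index}")
```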
118 - Lucky Galvez, Jon-Lark Kim 2019
Practically good error-correcting codes should have good parameters and efficient decoding algorithms. Some algebraically defined good codes such as cyclic codes, Reed-Solomon codes, and Reed-Muller codes have nice decoding algorithms. However, many optimal linear codes do not have an efficient decoding algorithm except for general syndrome decoding, which requires a lot of memory. Therefore, it is a natural question which optimal linear codes have an efficient decoding algorithm. We show that two binary optimal $[36,19,8]$ linear codes and two binary optimal $[40,22,8]$ codes have an efficient decoding algorithm. There was no known efficient decoding algorithm for the binary optimal $[36,19,8]$ and $[40,22,8]$ codes. We project them onto the much shorter linear $[9,5,4]$ and $[10,6,4]$ codes over $GF(4)$, respectively. This decoding algorithm, called projection decoding, can correct errors of weight up to 3. These $[36,19,8]$ and $[40,22,8]$ codes respectively have more codewords than any optimal self-dual $[36,18,8]$ and $[40,20,8]$ codes of the same length and minimum weight, implying that these codes are more practical.
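For context, the general syndrome decoding mentioned above (the standard baseline, not the paper's projection decoding) can be sketched as follows; its lookup table has $2^{n-k}$ entries, e.g. $2^{17}$ for a $[36,19,8]$ code, which is the memory cost the authors' projection decoding avoids. The example below uses the small $[7,4,3]$ Hamming code and is purely illustrative.

```python
import itertools
import numpy as np

# Standard syndrome-table decoding: map each syndrome to a minimum-weight
# error pattern (coset leader), then correct by table lookup.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # [7,4,3] Hamming parity-check matrix

def build_syndrome_table(H):
    n = H.shape[1]
    table = {}
    for weight in range(n + 1):                    # enumerate errors by weight
        for positions in itertools.combinations(range(n), weight):
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1
            table.setdefault(tuple(H @ e % 2), e)  # keep lowest-weight leader
        if len(table) == 2 ** H.shape[0]:          # all 2^(n-k) syndromes found
            break
    return table

table = build_syndrome_table(H)
received = np.array([1, 0, 1, 1, 0, 1, 1])         # word with one bit flipped
syndrome = tuple(H @ received % 2)
corrected = (received + table[syndrome]) % 2       # flip the estimated error bits
print("estimated codeword:", corrected)
```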