
Minimum Expected Distortion in Gaussian Source Coding with Fading Side Information

Added by Chris Ng
Publication date: 2012
Language: English





An encoder, subject to a rate constraint, wishes to describe a Gaussian source under squared error distortion. The decoder, besides receiving the encoder's description, also observes side information consisting of the uncompressed source symbol subject to slow fading and noise. The decoder knows the fading realization, but the encoder knows only its distribution. The rate-distortion function that simultaneously satisfies the distortion constraints for all fading states was derived by Heegard and Berger. A layered encoding strategy is considered in which each codeword layer targets a given fading state. When the side-information channel has two discrete fading states, the expected distortion is minimized by optimally allocating the encoding rate between the two codeword layers. For multiple fading states, the minimum expected distortion is formulated as the solution of a convex optimization problem with linearly many variables and constraints. Through a limiting process on the primal and dual solutions, it is shown that single-layer rate allocation is optimal when the fading probability density function is continuous and quasiconcave (e.g., Rayleigh, Rician, Nakagami, and log-normal). In particular, under Rayleigh fading, the optimal single codeword layer targets the least favorable state as if the side information were absent.
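As a rough illustration of the two-state rate allocation, the following is a minimal numerical sketch. It models each layer as a successive Wyner-Ziv stage with Gaussian test channels, a simplified stand-in for the exact Heegard-Berger expressions in the paper; all parameter values (sigma2, n1, n2, p1, p2, R) are made up for illustration.

```python
import numpy as np

# Illustrative (made-up) parameters: unit-variance Gaussian source, two
# side-information noise levels n1 > n2 (state 2 is better), state
# probabilities, and total encoding rate R in bits per source symbol.
sigma2 = 1.0          # source variance
n1, n2 = 2.0, 0.5     # side-information noise variances (state 1 worse)
p1, p2 = 0.4, 0.6     # fading-state probabilities
R = 1.5               # total rate (bits per source symbol)

def cond_var(prior, noise):
    # var(X | X + N) for independent Gaussians: harmonic sum of variances.
    return 1.0 / (1.0 / prior + 1.0 / noise)

def expected_distortion(r1):
    """Layer 1 (rate r1) targets the bad state; layer 2 (rate R - r1)
    refines for the good state, which also decodes layer 1."""
    s1v = cond_var(sigma2, n1)              # var(X | Y1)
    d1 = s1v * 2.0 ** (-2.0 * r1)           # distortion at state 1
    if r1 > 0.0:
        # Model layer 1 as a test channel W1 = X + Q; back out var(Q).
        q = 1.0 / (1.0 / d1 - 1.0 / s1v)
        s2v = 1.0 / (1.0 / cond_var(sigma2, n2) + 1.0 / q)  # var(X|Y2,W1)
    else:
        s2v = cond_var(sigma2, n2)
    d2 = s2v * 2.0 ** (-2.0 * (R - r1))     # distortion at state 2
    return p1 * d1 + p2 * d2

grid = np.linspace(0.0, R, 2001)
ed = [expected_distortion(r1) for r1 in grid]
best = int(np.argmin(ed))
print(f"layer-1 rate ~ {grid[best]:.3f} bits, expected distortion ~ {ed[best]:.4f}")
```

Sweeping the split exposes the trade-off the paper optimizes exactly: rate spent on the base layer lowers the bad state's distortion and, through the decoded codeword, also sharpens the refinement decoder's effective side information.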



Related Research

A transmitter without channel state information (CSI) wishes to send a delay-limited Gaussian source over a slowly fading channel. The source is coded in superimposed layers, with each layer successively refining the description in the previous one. The receiver decodes the layers that are supported by the channel realization and reconstructs the source up to a distortion. In the limit of a continuum of infinite layers, the optimal power distribution that minimizes the expected distortion is given by the solution to a set of linear differential equations in terms of the density of the fading distribution. In the optimal power distribution, as SNR increases, the allocation over the higher layers remains unchanged; rather, the extra power is allocated towards the lower layers. On the other hand, as the bandwidth ratio b (channel uses per source symbol) tends to zero, the power distribution that minimizes expected distortion converges to the power distribution that maximizes expected capacity. While expected distortion can be improved by acquiring CSI at the transmitter (CSIT) or by increasing diversity from the realization of independent fading paths, at high SNR the performance benefit from diversity exceeds that from CSIT, especially when b is large.
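The claimed convergence as b tends to zero can be made plausible with a first-order expansion, sketched here for a unit-variance Gaussian source under the standard distortion-rate function (an illustrative argument, not the paper's derivation):

```latex
% Expected distortion with cumulative decodable rate R(s) at fading state s,
% fading density f, and bandwidth ratio b (channel uses per source symbol):
\mathbb{E}[D] = \int_0^\infty f(s)\, 2^{-2 b R(s)}\, \mathrm{d}s .
% Expanding 2^{-2 b R(s)} = 1 - (2 \ln 2)\, b\, R(s) + O(b^2) gives
\mathbb{E}[D] = 1 - (2 \ln 2)\, b \int_0^\infty f(s)\, R(s)\, \mathrm{d}s + O(b^2),
% so for small b, minimizing expected distortion is equivalent to
% maximizing the expected rate, i.e., the expected capacity.
```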
We consider the classic joint source-channel coding problem of transmitting a memoryless source over a memoryless channel. The focus of this work is on the long-standing open problem of finding the rate of convergence of the smallest attainable expected distortion to its asymptotic value, as a function of blocklength $n$. Our main result is that in general the convergence rate is not faster than $n^{-1/2}$. In particular, we show that for the problem of transmitting i.i.d. uniform bits over a binary symmetric channel with Hamming distortion, the smallest attainable distortion (bit error rate) is at least $\Omega(n^{-1/2})$ above the asymptotic value, if the ``bandwidth expansion ratio'' is above $1$.
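The asymptotic value referenced here follows from source-channel separation; below is a short sketch that computes it for the binary example, with illustrative parameter values (crossover probability p and bandwidth expansion ratio rho).

```python
import math

def hb(x):
    """Binary entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def asymptotic_ber(p, rho):
    """Asymptotic bit error rate D* for i.i.d. uniform bits over a BSC(p)
    at bandwidth expansion ratio rho, from separation:
    R(D*) = 1 - hb(D*) = rho * (1 - hb(p))."""
    target = rho * (1.0 - hb(p))   # channel bits of information per source bit
    if target >= 1.0:
        return 0.0                 # capacity exceeds the source entropy
    lo, hi = 0.0, 0.5              # 1 - hb(D) is decreasing on [0, 1/2]
    for _ in range(80):            # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if 1.0 - hb(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"D* = {asymptotic_ber(p=0.11, rho=1.0):.4f}")  # example: BSC(0.11), rho = 1
```

For rho = 1 the solution is D* = p, recovering the classical fact that uncoded transmission of uniform bits over a BSC is optimal at matched bandwidth.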
This letter investigates a new class of index coding problems. One sender broadcasts packets to multiple users, each of which desires a subset of the packets and exploits prior knowledge of linear combinations of the packets. We refer to this class of problems as index coding with coded side information. Our aim is to characterize the minimum index code length that the sender needs to transmit to simultaneously satisfy all user requests. We show that the optimal binary vector index code length is equal to the minimum rank (minrank) of a matrix whose elements consist of the sets of desired packet indices and side-information encoding matrices. This is the natural extension of the matrix minrank to the presence of coded side information. Using the derived expression, we propose a greedy randomized algorithm to minimize the rank of the derived matrix.
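For orientation, the sketch below brute-forces the classic binary minrank with uncoded side information, which the letter's expression generalizes to coded side information. The instance is made up, and the exhaustive search is only feasible for a handful of users, which is why the letter resorts to a greedy randomized algorithm.

```python
import itertools

def gf2_rank(rows):
    """Rank of a set of GF(2) row vectors, each packed as an int bitmask."""
    basis = []
    for r in rows:
        for b in basis:
            r = min(r, r ^ b)   # reduce r by b's leading bit if it is set
        if r:
            basis.append(r)
    return len(basis)

def minrank(side_info):
    """Classic binary minrank: user i wants packet i and knows the packets
    in side_info[i]. Minimize rank over all matrices A with A[i][i] = 1,
    A[i][j] free for j in side_info[i], and A[i][j] = 0 elsewhere."""
    n = len(side_info)
    best = n
    row_choices = []
    for i in range(n):
        free = sorted(side_info[i])
        opts = []
        # Enumerate every 0/1 assignment of row i's free entries.
        for bits in itertools.product([0, 1], repeat=len(free)):
            row = 1 << i
            for j, bit in zip(free, bits):
                row |= bit << j
            opts.append(row)
        row_choices.append(opts)
    for rows in itertools.product(*row_choices):
        best = min(best, gf2_rank(rows))
    return best

# Example: 3 users whose side information forms a cycle; one transmission
# is saved relative to sending all 3 packets.
print(minrank([{1}, {2}, {0}]))   # -> 2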
A transmitter without channel state information (CSI) wishes to send a delay-limited Gaussian source over a slowly fading channel. The source is coded in superimposed layers, with each layer successively refining the description in the previous one. The receiver decodes the layers that are supported by the channel realization and reconstructs the source up to a distortion. The expected distortion is minimized by optimally allocating the transmit power among the source layers. For two source layers, the allocation is optimal when power is first assigned to the higher layer up to a power ceiling that depends only on the channel fading distribution; all remaining power, if any, is allocated to the lower layer. For convex distortion cost functions with convex constraints, the minimization is formulated as a convex optimization problem. In the limit of a continuum of infinite layers, the minimum expected distortion is given by the solution to a set of linear differential equations in terms of the density of the fading distribution. As the bandwidth ratio b (channel uses per source symbol) tends to zero, the power distribution that minimizes expected distortion converges to the one that maximizes expected capacity. While expected distortion can be improved by acquiring CSI at the transmitter (CSIT) or by increasing diversity from the realization of independent fading paths, at high SNR the performance benefit from diversity exceeds that from CSIT, especially when b is large.
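Below is a minimal two-layer sketch of the power allocation, assuming a unit-variance Gaussian source, superposition coding with successive cancellation, and made-up parameters (s1, s2, p1, p2, P, b). The exact power-ceiling characterization is the paper's result; this merely sweeps the split numerically.

```python
import numpy as np

# Illustrative (made-up) parameters: two-state fading channel with power
# gains s1 < s2, total transmit power P, bandwidth ratio b.
s1, s2 = 0.5, 4.0     # channel power gains (state 1 is the weak one)
p1, p2 = 0.5, 0.5     # state probabilities
P, b = 10.0, 1.0      # total power, channel uses per source symbol

def expected_distortion(p_hi):
    """Base layer (power P - p_hi) targets s1 with the refinement layer
    (power p_hi) treated as interference; the strong state decodes both
    layers via successive cancellation."""
    p_lo = P - p_hi
    snr_base = s1 * p_lo / (1.0 + s1 * p_hi)   # base layer at the weak state
    snr_ref = s2 * p_hi                        # refinement after cancellation
    d1 = (1.0 + snr_base) ** (-b)              # weak state: base layer only
    d2 = ((1.0 + snr_base) * (1.0 + snr_ref)) ** (-b)  # strong: both layers
    return p1 * d1 + p2 * d2

grid = np.linspace(0.0, P, 2001)
ed = [expected_distortion(p_hi) for p_hi in grid]
best = int(np.argmin(ed))
print(f"refinement-layer power ~ {grid[best]:.2f} of {P}, E[D] ~ {ed[best]:.4f}")
```

Sweeping p_hi for different total powers P illustrates the paper's qualitative finding: beyond a point, extra power no longer goes to the refinement layer.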
This paper focuses on the structural properties of test channels of Wyner's operational information rate distortion function (RDF), $\overline{R}(\Delta_X)$, of a tuple of multivariate correlated, jointly independent and identically distributed Gaussian random variables (RVs), $\{X_t, Y_t\}_{t=1}^\infty$, $X_t: \Omega \rightarrow \mathbb{R}^{n_x}$, $Y_t: \Omega \rightarrow \mathbb{R}^{n_y}$, with average mean-square error at the decoder, $\frac{1}{n} \mathbf{E} \sum_{t=1}^n \|X_t - \widehat{X}_t\|^2 \leq \Delta_X$, when $\{Y_t\}_{t=1}^\infty$ is the side information available to the decoder only. We construct optimal test channel realizations, which achieve the informational RDF, $\overline{R}(\Delta_X) \triangleq \inf_{\mathcal{M}(\Delta_X)} I(X; Z|Y)$, where $\mathcal{M}(\Delta_X)$ is the set of auxiliary RVs $Z$ such that $\mathbf{P}_{Z|X,Y} = \mathbf{P}_{Z|X}$, $\widehat{X} = f(Y, Z)$, and $\mathbf{E}\{\|X - \widehat{X}\|^2\} \leq \Delta_X$. We show the fundamental structural properties: (1) optimal test channel realizations that achieve the RDF, $\overline{R}(\Delta_X)$, satisfy the conditional independence $\mathbf{P}_{X|\widehat{X}, Y, Z} = \mathbf{P}_{X|\widehat{X}, Y} = \mathbf{P}_{X|\widehat{X}}$ and $\mathbf{E}\{X \mid \widehat{X}, Y, Z\} = \mathbf{E}\{X \mid \widehat{X}\} = \widehat{X}$; and (2) similarly for the conditional RDF, $R_{X|Y}(\Delta_X) \triangleq \inf_{\mathbf{P}_{\widehat{X}|X,Y}: \mathbf{E}\{\|X - \widehat{X}\|^2\} \leq \Delta_X} I(X; \widehat{X}|Y)$, when $\{Y_t\}_{t=1}^\infty$ is available to both the encoder and decoder, together with the equality $\overline{R}(\Delta_X) = R_{X|Y}(\Delta_X)$.
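The equality $\overline{R}(\Delta_X) = R_{X|Y}(\Delta_X)$ extends Wyner's classical scalar Gaussian result, recalled here as a worked special case for orientation:

```latex
% Scalar case: X ~ N(0, \sigma_X^2), side information Y jointly Gaussian
% with X with correlation coefficient \rho. The conditional variance is
\sigma_{X|Y}^2 = (1 - \rho^2)\, \sigma_X^2 ,
% and the decoder-only and encoder-and-decoder side-information RDFs
% coincide (no rate loss in the Gaussian/MSE case):
\overline{R}(\Delta_X) = R_{X|Y}(\Delta_X)
  = \max\Bigl\{ \tfrac{1}{2} \log_2 \frac{\sigma_{X|Y}^2}{\Delta_X},\; 0 \Bigr\} .
% An optimal test channel: Z = X + V with Gaussian V independent of (X, Y),
% and \widehat{X} = \mathbf{E}\{X \mid Y, Z\}, a linear MMSE function of (Y, Z).
```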
