
A lower bound on the differential entropy of log-concave random vectors with applications

 Added by Arnaud Marsiglietti
 Publication date 2017
Language: English





We derive a lower bound on the differential entropy of a log-concave random variable $X$ in terms of the $p$-th absolute moment of $X$. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity. Specifically, we study the rate-distortion function for log-concave sources and distortion measure $|x - \hat{x}|^r$, and we establish that the difference between the rate-distortion function and the Shannon lower bound is at most $\log(\sqrt{\pi e}) \approx 1.5$ bits, independently of $r$ and the target distortion $d$. For mean-square error distortion, the difference is at most $\log(\sqrt{\frac{\pi e}{2}}) \approx 1$ bit, regardless of $d$. We also provide bounds on the capacity of memoryless additive noise channels when the noise is log-concave. We show that the difference between the capacity of such channels and the capacity of the Gaussian channel with the same noise power is at most $\log(\sqrt{\frac{\pi e}{2}}) \approx 1$ bit. Our results generalize to the case of a vector $X$ with possibly dependent coordinates, and to $\gamma$-concave random variables. Our proof technique leverages tools from convex geometry.
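The two universal constants in the abstract can be checked numerically. A minimal sketch in Python (assuming logarithms are taken base 2, so the gaps come out in bits; the variable names are illustrative, not from the paper):

```python
import math

# Gap between the rate-distortion function and the Shannon lower bound
# for log-concave sources with distortion |x - x_hat|^r (any r, any d):
# log2(sqrt(pi * e)) ~ 1.5 bits.
gap_general = 0.5 * math.log2(math.pi * math.e)

# Same gap under mean-square error distortion, and also the capacity gap
# to the Gaussian channel with equal noise power:
# log2(sqrt(pi * e / 2)) ~ 1 bit.
gap_mse = 0.5 * math.log2(math.pi * math.e / 2)

print(f"log2(sqrt(pi*e))   = {gap_general:.3f} bits")
print(f"log2(sqrt(pi*e/2)) = {gap_mse:.3f} bits")
```

Evaluating these confirms the rounded figures quoted in the abstract: roughly 1.547 bits for the general distortion measure and 1.047 bits for mean-square error.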



Related research

Let $C$ and $K$ be centrally symmetric convex bodies of volume $1$ in ${\mathbb R}^n$. We provide upper bounds for the multi-integral expression \begin{equation*}\|{\bf t}\|_{C^s,K}=\int_{C}\cdots\int_{C}\Big\|\sum_{j=1}^s t_j x_j\Big\|_K\,dx_1\cdots dx_s\end{equation*} in the case where $C$ is isotropic. Our approach provides an alternative proof of the sharp lower bound, due to Gluskin and V. Milman, for this quantity. We also present some applications to randomized vector balancing problems.
Let $x_1,\ldots,x_N$ be independent random points distributed according to an isotropic log-concave measure $\mu$ on ${\mathbb R}^n$, and consider the random polytope $$K_N:={\rm conv}\{\pm x_1,\ldots,\pm x_N\}.$$ We provide sharp estimates for the quermassintegrals and other geometric parameters of $K_N$ in the range $cn\leqslant N\leqslant\exp(n)$; these complement previous results from \cite{DGT1} and \cite{DGT} that were given for the range $cn\leqslant N\leqslant\exp(\sqrt{n})$. One of the basic new ingredients in our work is a recent result of E.~Milman that determines the mean width of the centroid body $Z_q(\mu)$ of $\mu$ for all $1\leqslant q\leqslant n$.
We consider the classic joint source-channel coding problem of transmitting a memoryless source over a memoryless channel. The focus of this work is on the long-standing open problem of finding the rate of convergence of the smallest attainable expected distortion to its asymptotic value, as a function of blocklength $n$. Our main result is that in general the convergence rate is not faster than $n^{-1/2}$. In particular, we show that for the problem of transmitting i.i.d. uniform bits over a binary symmetric channel with Hamming distortion, the smallest attainable distortion (bit error rate) is at least $\Omega(n^{-1/2})$ above the asymptotic value, if the ``bandwidth expansion ratio'' is above $1$.
A closed-form expression for a lower bound on the per soliton capacity of the nonlinear optical fibre channel in the presence of (optical) amplifier spontaneous emission (ASE) noise is derived. This bound is based on a non-Gaussian conditional probability density function for the soliton amplitude jitter induced by the ASE noise and is proven to grow logarithmically as the signal-to-noise ratio increases.
A lower bound on the maximum likelihood (ML) decoding error exponent of linear block code ensembles, on the erasure channel, is developed. The lower bound turns out to be positive, over an ensemble-specific interval of erasure probabilities, when the ensemble weight spectral shape function tends to a negative value as the fractional codeword weight tends to zero. For these ensembles we can therefore lower bound the block-wise ML decoding threshold. Two examples are presented, namely, linear random parity-check codes and fixed-rate Raptor codes with linear random precoders. While for the former a full analytical solution is possible, for the latter we can lower bound the ML decoding threshold on the erasure channel by simply solving a $2\times 2$ system of nonlinear equations.
