
On the Information Dimension of Stochastic Processes

Published by: Bernhard C. Geiger
Publication date: 2017
Research field: Information engineering
Paper language: English





In 1959, Rényi proposed the information dimension and the $d$-dimensional entropy to measure the information content of general random variables. This paper proposes a generalization of information dimension to stochastic processes by defining the information dimension rate as the entropy rate of the uniformly-quantized stochastic process divided by minus the logarithm of the quantizer step size $1/m$ in the limit as $m \to \infty$. It is demonstrated that the information dimension rate coincides with the rate-distortion dimension, defined as twice the rate-distortion function $R(D)$ of the stochastic process divided by $-\log(D)$ in the limit as $D \downarrow 0$. It is further shown that, among all multivariate stationary processes with a given (matrix-valued) spectral distribution function (SDF), the Gaussian process has the largest information dimension rate, and that the information dimension rate of multivariate stationary Gaussian processes is given by the average rank of the derivative of the SDF. The presented results reveal that the fundamental limits of almost zero-distortion recovery via compressible signal pursuit and almost lossless analog compression are different in general.
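In symbols (a notational sketch based on the definitions in this abstract, with $[X_t]_m$ denoting the uniform quantization of $X_t$ with step size $1/m$ and $H'(\cdot)$ denoting the entropy rate), the two quantities compared above are

$$ d(\{X_t\}) \;=\; \lim_{m\to\infty} \frac{H'(\{[X_t]_m\})}{\log m}, \qquad \dim_R(\{X_t\}) \;=\; \lim_{D\downarrow 0} \frac{2\,R(D)}{-\log D}, $$

and the first result stated above is the equality $d(\{X_t\}) = \dim_R(\{X_t\})$, whenever the limits exist.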


Read also

The authors have recently defined the Rényi information dimension rate $d(\{X_t\})$ of a stationary stochastic process $\{X_t,\, t \in \mathbb{Z}\}$ as the entropy rate of the uniformly-quantized process divided by minus the logarithm of the quantizer step size $1/m$ in the limit as $m \to \infty$ (B. Geiger and T. Koch, "On the information dimension rate of stochastic processes," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Aachen, Germany, June 2017). For Gaussian processes with a given spectral distribution function $F_X$, they showed that the information dimension rate equals the Lebesgue measure of the set of harmonics where the derivative of $F_X$ is positive. This paper extends this result to multivariate Gaussian processes with a given matrix-valued spectral distribution function $F_{\mathbf{X}}$. It is demonstrated that the information dimension rate equals the average rank of the derivative of $F_{\mathbf{X}}$. As a side result, it is shown that the scale and translation invariance of information dimension carries over from random variables to stochastic processes.
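As a rough notational summary (assuming, as a convention not spelled out in this abstract, that the spectral distribution function is parameterized over the frequency interval $[-1/2, 1/2)$), the scalar and multivariate Gaussian results read

$$ d(\{X_t\}) \;=\; \lambda\big(\{\theta : F'_X(\theta) > 0\}\big), \qquad d(\{\mathbf{X}_t\}) \;=\; \int_{-1/2}^{1/2} \operatorname{rank}\big(F'_{\mathbf{X}}(\theta)\big)\, \mathrm{d}\theta, $$

where $\lambda$ denotes the Lebesgue measure and the derivatives are taken at the frequencies where they exist; integrating the rank over an interval of unit length is what "average rank" refers to above.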
The rate-distortion dimension (RDD) of an analog stationary process is studied as a measure of complexity that captures the amount of information contained in the process. It is shown that the RDD of a process, defined as two times the asymptotic ratio of its rate-distortion function $R(D)$ to $\log\frac{1}{D}$ as the distortion $D$ approaches zero, is equal to its information dimension (ID). This generalizes an earlier result by Kawabata and Dembo and provides an operational approach to evaluate the ID of a process, which previously was shown to be closely related to the effective dimension of the underlying process and also to the fundamental limits of compressed sensing. The relation between RDD and ID is illustrated for a piecewise constant process.
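For an i.i.d. process the entropy rate of the quantized process reduces to the entropy of a single quantized marginal, so the quantization-based definition can be illustrated numerically for a single random variable. The following Python sketch (an illustrative estimate under these assumptions, not code from any of the papers above; the helper names are made up) estimates $H(\lfloor mX \rfloor)/\log_2 m$ for a random variable that equals $0$ with probability $1/2$ and is uniform on $[0,1)$ otherwise, whose Rényi information dimension is $1/2$:

import numpy as np

rng = np.random.default_rng(0)

def empirical_entropy_bits(symbols):
    # Plug-in (empirical) Shannon entropy of a discrete sample, in bits.
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def dimension_estimate(samples, m):
    # Estimate H(floor(m*X)) / log2(m), the quantization-based information dimension.
    quantized = np.floor(m * samples).astype(np.int64)
    return empirical_entropy_bits(quantized) / np.log2(m)

# Mixed random variable: X = 0 with probability 1/2, else X ~ Uniform[0, 1).
# Its Renyi information dimension is 1/2.
n = 500_000
mask = rng.random(n) < 0.5
x = np.where(mask, 0.0, rng.random(n))

for m in (2**8, 2**10, 2**12):
    print(f"m = {m:5d}: estimate = {dimension_estimate(x, m):.3f}")

Because $H(\lfloor mX \rfloor)$ grows like $d \log m$ plus an additive constant, the ratio approaches $1/2$ only at rate $O(1/\log m)$, so the printed values overshoot the true dimension noticeably at these quantizer resolutions.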
In an effort to develop the foundations for a non-stochastic theory of information, the notion of $\delta$-mutual information between uncertain variables is introduced as a generalization of Nair's non-stochastic information functional. Several properties of this new quantity are illustrated, and used to prove a channel coding theorem in a non-stochastic setting. Namely, it is shown that the largest $\delta$-mutual information between received and transmitted codewords over $\epsilon$-noise channels equals the $(\epsilon, \delta)$-capacity. This notion of capacity generalizes the Kolmogorov $\epsilon$-capacity to packing sets of overlap at most $\delta$, and is a variation of a previous definition proposed by one of the authors. Results are then extended to more general noise models, and to non-stochastic, memoryless, stationary channels. Finally, sufficient conditions are established for the factorization of the $\delta$-mutual information and to obtain a single-letter capacity expression. Compared to previous non-stochastic approaches, the presented theory admits the possibility of decoding errors as in Shannon's probabilistic setting, while retaining a worst-case, non-stochastic character.
Given a probability measure $\mu$ over $\mathbb{R}^n$, it is often useful to approximate it by the convex combination of a small number of probability measures, such that each component is close to a product measure. Recently, Ronen Eldan used a stochastic localization argument to prove a general decomposition result of this type. In Eldan's theorem, the 'number of components' is characterized by the entropy of the mixture, and 'closeness to product' is characterized by the covariance matrix of each component. We present an elementary proof of Eldan's theorem which makes use of an information theory (or estimation theory) interpretation. The proof is analogous to the one of an earlier decomposition result known as the 'pinning lemma'.
We investigate the asymptotic rates of length-$n$ binary codes with VC-dimension at most $dn$ and minimum distance at least $\delta n$. Two upper bounds are obtained, one as a simple corollary of a result by Haussler and the other via a shortening approach combining the Sauer-Shelah lemma and the linear programming bound. Two lower bounds are given using Gilbert-Varshamov type arguments over constant-weight and Markov-type sets.
