
Novel Lower Bounds on the Entropy Rate of Binary Hidden Markov Processes

Posted by: Or Ordentlich
Publication date: 2016
Research field: Information Engineering
Language: English
Author: Or Ordentlich





Recently, Samorodnitsky proved a strengthened version of Mrs. Gerber's Lemma, where the output entropy of a binary symmetric channel is bounded in terms of the average entropy of the input projected onto a random subset of coordinates. Here, this result is applied to derive novel lower bounds on the entropy rate of binary hidden Markov processes. For symmetric underlying Markov processes, our bound improves upon the best known bound in the very noisy regime. The nonsymmetric case is also considered, and explicit bounds are derived for Markov processes that satisfy the $(1,\infty)$-RLL constraint.
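To make the setting concrete, the sketch below (a minimal illustration, not the paper's new Samorodnitsky-based bound) estimates the entropy rate of a binary hidden Markov process $Y_i = X_i \oplus Z_i$, where $X$ is a symmetric binary Markov chain with flip probability q and $Z_i \sim \mathrm{Bernoulli}(p)$ is i.i.d. noise, and compares it with the classical conditional-entropy lower bound $h(p \ast q)$, where $a \ast b = a(1-b)+(1-a)b$. The parameter values q and p are illustrative choices, not taken from the paper.

```python
import numpy as np

# Monte Carlo estimate of the entropy rate of Y_i = X_i xor Z_i, where X is a
# symmetric binary Markov chain (flip probability q) and Z_i ~ Bernoulli(p),
# compared with the classical bound H(Y) >= H(Y_n | Y^{n-1}, X_{n-1}) = h(p*q).

def h(x):
    """Binary entropy in bits."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def bconv(a, b):
    """Binary convolution a*b = a(1-b) + (1-a)b."""
    return a * (1 - b) + (1 - a) * b

def estimate_entropy_rate(q, p, n=100_000, seed=0):
    """Estimate H(Y) as -(1/n) log2 P(y^n) via the forward recursion."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)                     # stationary (uniform) start
    for i in range(1, n):
        x[i] = x[i - 1] ^ int(rng.random() < q)
    y = x ^ (rng.random(n) < p)

    pi = 0.5                                   # P(X_i = 1 | y^{i-1})
    log_prob = 0.0
    for yi in y:
        p1 = bconv(pi, p)                      # P(Y_i = 1 | y^{i-1})
        p_obs = p1 if yi == 1 else 1.0 - p1
        log_prob += np.log2(p_obs)
        post = pi * ((1 - p) if yi == 1 else p) / p_obs   # P(X_i = 1 | y^i)
        pi = bconv(post, q)                    # predict X_{i+1}
    return -log_prob / n

q, p = 0.1, 0.3
print("simulated entropy rate       ~", round(estimate_entropy_rate(q, p), 4))
print("classical lower bound h(p*q) =", round(h(bconv(p, q)), 4))
```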




Read also

Shuangping Chen, Jun Li, Mi Zhou (2011)
In this paper, the approximate sequence for the entropy of some binary hidden Markov models is shown to have two bounding sequences, a lower bound sequence and an upper bound sequence. The error of the approximate sequence is bounded by a geometric sequence with ratio less than 1, which decreases quickly to zero. This helps in understanding the convergence of the entropy rate of generic hidden Markov models, and it provides a theoretical basis for estimating the entropy rate of some hidden Markov models to any accuracy.
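As an illustration of such sandwich sequences, the sketch below computes the classical Cover-Thomas conditional-entropy bounds $H(Y_n|Y^{n-1},X_1) \le H(Y) \le H(Y_n|Y^{n-1})$ exactly for small n, for a symmetric binary Markov chain observed through a BSC. These are standard bounding sequences, not necessarily the exact sequences analysed in the abstract, and the parameters q and p are illustrative.

```python
import itertools
import numpy as np

# Exact lower/upper bound sequences on the entropy rate of a binary HMM:
# symmetric Markov chain with flip probability q, observed through a BSC(p).

def seq_prob(y, q, p, x1=None):
    """P(y_1..y_n), or P(y_1..y_n | X_1 = x1) when x1 is given."""
    alpha = np.array([0.5, 0.5]) if x1 is None else np.eye(2)[x1]
    T = np.array([[1 - q, q], [q, 1 - q]])
    for i, yi in enumerate(y):
        alpha = alpha * np.array([p if yi != s else 1 - p for s in (0, 1)])
        if i < len(y) - 1:
            alpha = alpha @ T
    return alpha.sum()

def block_entropy(n, q, p, x1=None):
    probs = [seq_prob(y, q, p, x1) for y in itertools.product((0, 1), repeat=n)]
    probs = np.array([pr for pr in probs if pr > 0])
    return float(-np.sum(probs * np.log2(probs)))

q, p = 0.1, 0.3
for n in range(2, 9):
    upper = block_entropy(n, q, p) - block_entropy(n - 1, q, p)
    # by the 0/1 symmetry of this model, conditioning on X_1 = 0 equals averaging over X_1
    lower = block_entropy(n, q, p, x1=0) - block_entropy(n - 1, q, p, x1=0)
    print(f"n={n}:  {lower:.6f} <= H(Y) <= {upper:.6f}")
```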
While two hidden Markov process (HMP) resp. quantum random walk (QRW) parametrizations can differ from one another, the stochastic processes arising from them can be equivalent. Here a polynomial-time algorithm is presented which can determine equivalence of two HMP parametrizations $\mathcal{M}_1,\mathcal{M}_2$ resp. two QRW parametrizations $\mathcal{Q}_1,\mathcal{Q}_2$ in time $O(|S|\max(N_1,N_2)^{4})$, where $N_1,N_2$ are the number of hidden states in $\mathcal{M}_1,\mathcal{M}_2$ resp. the dimension of the state spaces associated with $\mathcal{Q}_1,\mathcal{Q}_2$, and $S$ is the set of output symbols. Previously available algorithms for testing equivalence of HMPs were exponential in the number of hidden states. In case of QRWs, algorithms for testing equivalence had not yet been presented. The core subroutines of this algorithm can also be used to efficiently test hidden Markov processes and quantum random walks for ergodicity.
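For contrast with the polynomial-time algorithm described above, here is a naive brute-force equivalence check: two HMP parametrizations define the same process iff they assign equal probability to every output string, and results from weighted-automata theory imply it suffices to compare strings up to length $N_1+N_2$. This check is exponential, unlike the algorithm of the abstract, and all model values and helper names below are illustrative.

```python
import itertools
import numpy as np

def string_prob(hmm, word):
    """P(word) under an HMM given as (init, T, E): initial distribution init,
    transition matrix T (rows sum to 1), emission matrix E[state, symbol]."""
    init, T, E = hmm
    alpha = init.copy()
    for i, sym in enumerate(word):
        alpha = alpha * E[:, sym]
        if i < len(word) - 1:
            alpha = alpha @ T
    return float(alpha.sum())

def equivalent(hmm1, hmm2, n_symbols, tol=1e-10):
    n1, n2 = len(hmm1[0]), len(hmm2[0])
    for length in range(1, n1 + n2 + 1):        # exponential in n1 + n2
        for word in itertools.product(range(n_symbols), repeat=length):
            if abs(string_prob(hmm1, word) - string_prob(hmm2, word)) > tol:
                return False
    return True

# A 2-state HMM and a copy with the hidden states relabelled: same process.
init = np.array([0.5, 0.5])
T = np.array([[0.9, 0.1], [0.2, 0.8]])
E = np.array([[0.7, 0.3], [0.4, 0.6]])
P = np.array([[0.0, 1.0], [1.0, 0.0]])          # swap the two hidden states
hmm_a = (init, T, E)
hmm_b = (init @ P, P @ T @ P, P @ E)
print(equivalent(hmm_a, hmm_b, n_symbols=2))    # expected: True
```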
In this work, novel upper and lower bounds for the capacity of channels with arbitrary constraints on the support of the channel input symbols are derived. As an immediate practical application, the case of multiple-input multiple-output channels with amplitude constraints is considered. The bounds are shown to be within a constant gap if the channel matrix is invertible and are tight in the high amplitude regime for arbitrary channel matrices. Moreover, in the high amplitude regime, it is shown that the capacity scales linearly with the minimum between the number of transmit and receive antennas, similarly to the case of average power-constrained inputs.
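As a much simpler scalar analogue (not the MIMO bounds of the abstract), the following sketch sandwiches the capacity of an amplitude-constrained AWGN channel between the rate achieved by a uniform input and the average-power upper bound; the amplitude A and noise variance s2 are illustrative values.

```python
import numpy as np
from scipy.stats import norm

# Y = X + N with |X| <= A and N ~ N(0, s2).
# Lower bound: mutual information achieved by X ~ Uniform[-A, A].
# Upper bound: 0.5*log2(1 + A^2/s2), valid because |X| <= A implies E[X^2] <= A^2.

def uniform_input_rate(A, s2, grid=20001, span=8.0):
    s = np.sqrt(s2)
    y = np.linspace(-A - span * s, A + span * s, grid)
    # output density for X ~ Uniform[-A, A]: f(y) = [Phi((y+A)/s) - Phi((y-A)/s)] / (2A)
    f = (norm.cdf((y + A) / s) - norm.cdf((y - A) / s)) / (2 * A)
    f = np.clip(f, 1e-300, None)
    h_out = -np.trapz(f * np.log2(f), y)            # differential entropy of Y (bits)
    h_noise = 0.5 * np.log2(2 * np.pi * np.e * s2)  # differential entropy of N (bits)
    return h_out - h_noise

A, s2 = 2.0, 1.0
lower = uniform_input_rate(A, s2)
upper = 0.5 * np.log2(1 + A ** 2 / s2)
print(f"{lower:.4f} <= C <= {upper:.4f} bits per channel use")
```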
Ray Li, Mary Wootters (2021)
Batch codes are a useful notion of locality for error correcting codes, originally introduced in the context of distributed storage and cryptography. Many constructions of batch codes have been given, but few lower bound (limitation) results are known, leaving gaps between the best known constructions and best known lower bounds. Towards determining the optimal redundancy of batch codes, we prove a new lower bound on the redundancy of batch codes. Specifically, we study (primitive, multiset) linear batch codes that systematically encode $n$ information symbols into $N$ codeword symbols, with the requirement that any multiset of $k$ symbol requests can be obtained in disjoint ways. We show that such batch codes need $\Omega(\sqrt{Nk})$ symbols of redundancy, improving on the previous best lower bounds of $\Omega(\sqrt{N}+k)$ at all $k=n^{\varepsilon}$ with $\varepsilon\in(0,1)$. Our proof follows from analyzing the dimension of the order-$O(k)$ tensor of the batch code's dual code.
This paper provides fundamental limits on the sample complexity of estimating dictionaries for tensor data. The specific focus of this work is on $K$th-order tensor data and the case where the underlying dictionary can be expressed in terms of $K$ smaller dictionaries. It is assumed the data are generated by linear combinations of these structured dictionary atoms and observed through white Gaussian noise. This work first provides a general lower bound on the minimax risk of dictionary learning for such tensor data and then adapts the proof techniques for specialized results in the case of sparse and sparse-Gaussian linear combinations. The results suggest the sample complexity of dictionary learning for tensor data can be significantly lower than that for unstructured data: for unstructured data it scales linearly with the product of the dictionary dimensions, whereas for tensor-structured data the bound scales linearly with the sum of the product of the dimensions of the (smaller) component dictionaries. A partial converse is provided for the case of 2nd-order tensor data to show that the bounds in this paper can be tight. This involves developing an algorithm for learning highly-structured dictionaries from noisy tensor data. Finally, numerical experiments highlight the advantages associated with explicitly accounting for tensor data structure during dictionary learning.
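A minimal sketch of the generative model just described, for K = 2 and with illustrative dimensions not taken from the paper: observations are noisy linear combinations of sparse coefficients over a Kronecker-structured dictionary, and the final print statements contrast the parameter counts behind the unstructured versus structured sample-complexity scalings.

```python
import numpy as np

rng = np.random.default_rng(0)

m1, p1 = 8, 12                 # rows / atoms of the first component dictionary
m2, p2 = 6, 10                 # rows / atoms of the second component dictionary
s, sigma, n = 4, 0.05, 1000    # sparsity, noise std, number of samples

D1 = rng.standard_normal((m1, p1))
D2 = rng.standard_normal((m2, p2))
D1 /= np.linalg.norm(D1, axis=0)        # normalise atoms to unit norm
D2 /= np.linalg.norm(D2, axis=0)
D = np.kron(D2, D1)                     # structured dictionary, (m1*m2) x (p1*p2)

# s-sparse coefficients with Gaussian nonzeros, observed in white Gaussian noise
X = np.zeros((p1 * p2, n))
for j in range(n):
    support = rng.choice(p1 * p2, size=s, replace=False)
    X[support, j] = rng.standard_normal(s)
Y = D @ X + sigma * rng.standard_normal((m1 * m2, n))

# Parameter counts behind the sample-complexity gap discussed above
print("unstructured dictionary parameters:", (m1 * m2) * (p1 * p2))
print("structured (sum over components):  ", m1 * p1 + m2 * p2)
```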