
On Time-Bounded Incompressibility of Compressible Strings and Sequences

Published by Paul Vitanyi
Publication date: 2009
Research field: Informatics engineering
Language: English





For every total recursive time bound $t$, a constant fraction of all compressible (low Kolmogorov complexity) strings is $t$-bounded incompressible (high time-bounded Kolmogorov complexity); there are uncountably many infinite sequences of which every initial segment of length $n$ is compressible to $\log n$ yet $t$-bounded incompressible below $\frac{1}{4}n - \log n$; and there are countably infinitely many recursive infinite sequences of which every initial segment is similarly $t$-bounded incompressible. These results are related to, but different from, Barzdins' lemma.
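Background, as a minimal sketch of the standard definitions the abstract relies on (textbook notation, not quoted from the paper): relative to a fixed universal machine $U$, the plain Kolmogorov complexity of a string $x$ is $C(x) = \min\{|p| : U(p) = x\}$, and its $t$-time-bounded complexity is $C^{t}(x) = \min\{|p| : U(p) = x \text{ within } t(|x|) \text{ steps}\}$. In this notation, a string $x$ of length $n$ that is compressible to $\log n$ satisfies $C(x) \le \log n$, while being $t$-bounded incompressible below $\frac{1}{4}n - \log n$ means, roughly, $C^{t}(x) \ge \frac{1}{4}n - \log n$.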




Read also

Eric Allender, 2014
This paper is motivated by a conjecture that BPP can be characterized in terms of polynomial-time nonadaptive reductions to the set of Kolmogorov-random strings. We show that an approach laid out in [Allender et al] to settle this conjecture cannot succeed without significant alteration, but that it does bear fruit if we consider time-bounded Kolmogorov complexity instead. We show that if a set A is reducible in polynomial time to the set of time-t-bounded Kolmogorov random strings (for all large enough time bounds t), then A is in P/poly, and that if in addition such a reduction exists for any universal Turing machine one uses in the definition of Kolmogorov complexity, then A is in PSPACE.
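One compact way to read the main statement (the notation here is a plausible rendering, not taken from the paper): writing $R_{K^t}$ for the set of time-$t$-bounded Kolmogorov-random strings with respect to the chosen universal machine, the result says that if $A$ reduces to $R_{K^t}$ in polynomial time for all sufficiently large time bounds $t$, then $A \in \mathrm{P/poly}$; and if such reductions exist regardless of which universal machine defines $K^t$, then $A \in \mathrm{PSPACE}$.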
Information-theoretic methods have proven to be a very powerful tool in communication complexity, in particular giving an elegant proof of the linear lower bound for the two-party disjointness function, and tight lower bounds on disjointness in the multi-party number-in-the-hand (NIH) model. In this paper, we study the applicability of information theoretic methods to the multi-party number-on-the-forehead model (NOF), where determining the complexity of disjointness remains an important open problem. There are two basic parts to the NIH disjointness lower bound: a direct sum theorem and a lower bound on the one-bit AND function using a beautiful connection between Hellinger distance and protocols revealed by Bar-Yossef, Jayram, Kumar and Sivakumar [BYJKS04]. Inspired by this connection, we introduce the notion of Hellinger volume. We show that it lower bounds the information cost of multi-party NOF protocols and provide a small toolbox that allows one to manipulate several Hellinger volume terms and lower bound a Hellinger volume when the distributions involved satisfy certain conditions. In doing so, we prove a new upper bound on the difference between the arithmetic mean and the geometric mean in terms of relative entropy. We then apply these new tools to obtain a lower bound on the informational complexity of the AND_k function in the NOF setting. Finally, we discuss the difficulties of proving a direct sum theorem for information cost in the NOF model.
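For reference, the Hellinger distance underlying this connection is the standard quantity (the paper's Hellinger volume is its multi-distribution generalization; that definition is the authors' and is not reproduced here): for distributions $P$ and $Q$ over the same domain, $h^2(P,Q) = 1 - \sum_x \sqrt{P(x)Q(x)} = \frac{1}{2}\sum_x \big(\sqrt{P(x)} - \sqrt{Q(x)}\big)^2$.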
The design of methods for inference from time sequences has traditionally relied on statistical models that describe the relation between a latent desired sequence and the observed one. A broad family of model-based algorithms have been derived to carry out inference at controllable complexity using recursive computations over the factor graph representing the underlying distribution. An alternative model-agnostic approach utilizes machine learning (ML) methods. Here we propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences. In the proposed approach, neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence, rather than the complete inference task. By exploiting stationary properties of this distribution, the resulting approach can be applied to sequences of varying temporal duration. Learned factor graphs can be realized using compact neural networks that are trainable using small training sets, or alternatively, be used to improve upon existing deep inference systems. We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data, and can be applied to sequences of different lengths. Our experimental results demonstrate the ability of the proposed learned factor graphs to learn to carry out accurate inference from small training sets for sleep stage detection using the Sleep-EDF dataset, as well as for symbol detection in digital communications with unknown channels.
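As a rough illustration of the classical computation that the learned components plug into, here is a minimal NumPy sketch of sum-product (forward-backward) inference on a stationary chain factor graph; the hand-coded node_factor stands in for the component a neural network would learn, and all names, shapes, and numbers are illustrative assumptions rather than the paper's implementation.

import numpy as np

S = 3                                   # number of latent states
T = np.array([[0.8, 0.1, 0.1],          # stationary transition factor P(s_t | s_{t-1})
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def node_factor(y_t):
    """Stand-in for a learned factor: likelihood of observation y_t under each state."""
    means = np.array([-1.0, 0.0, 1.0])
    return np.exp(-0.5 * (y_t - means) ** 2)

def sum_product_chain(y):
    """Sum-product (forward-backward) marginals P(s_t | y_1..y_n) on a chain."""
    n = len(y)
    fwd = np.zeros((n, S))
    bwd = np.ones((n, S))
    prior = np.full(S, 1.0 / S)
    for t in range(n):                  # forward messages
        msg = prior if t == 0 else fwd[t - 1] @ T
        fwd[t] = msg * node_factor(y[t])
        fwd[t] /= fwd[t].sum()
    for t in range(n - 2, -1, -1):      # backward messages
        bwd[t] = T @ (node_factor(y[t + 1]) * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    post = fwd * bwd                    # combine and normalize per time step
    return post / post.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    y = np.array([-1.1, -0.9, 0.1, 1.2, 0.8])
    print(sum_product_chain(y))         # per-step posterior state marginals

Because the transition factor and the node factor are the only ingredients, replacing node_factor with a trained network is what makes the same message-passing recursion applicable to sequences of any length.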
Two parties wish to carry out certain distributed computational tasks, and they are given access to a source of correlated random bits. It allows the parties to act in a correlated manner, which can be quite useful. But what happens if the shared randomness is not perfect? In this work, we initiate the study of the power of different sources of shared randomness in communication complexity. This is done in the setting of simultaneous message passing (SMP) model of communication complexity, which is one of the most suitable models for studying the resource of shared randomness. Toward characterising the power of various sources of shared randomness, we introduce a measure for the quality of a source - we call it collision complexity. Our results show that the collision complexity tightly characterises the power of a (shared) randomness resource in the SMP model. Of independent interest is our demonstration that even the weakest sources of shared randomness can in some cases increase the power of SMP substantially: the equality function can be solved very efficiently with virtually any nontrivial shared randomness.
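To make the closing remark concrete, below is a minimal Python sketch of the textbook shared-randomness fingerprinting protocol for equality in the SMP model; it uses a perfect shared random source (the baseline case, not one of the weak sources the paper studies), and the function names are illustrative.

import random

def fingerprint(x, shared_r):
    """Inner product (mod 2) of the input with each shared random string."""
    return [sum(xi & ri for xi, ri in zip(x, r)) % 2 for r in shared_r]

def smp_equality(x, y, k=20, seed=0):
    """Referee compares the k-bit fingerprints Alice and Bob send.
    If x == y the fingerprints always match; if x != y, each fingerprint bit
    matches only with probability 1/2, so the error is at most 2**-k."""
    rng = random.Random(seed)                      # models the shared random source
    shared_r = [[rng.randint(0, 1) for _ in x] for _ in range(k)]
    return fingerprint(x, shared_r) == fingerprint(y, shared_r)

if __name__ == "__main__":
    x = [1, 0, 1, 1, 0, 0, 1, 0]
    print(smp_equality(x, x))                      # True
    print(smp_equality(x, [1, 0, 1, 1, 0, 0, 1, 1]))  # False except with prob. 2**-20

The point of the example is that each party sends only k bits regardless of the input length, which is exactly the kind of saving that shared randomness buys in the SMP model.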
Anurag Anshu, Debbie Leung, 2019
In blind compression of quantum states, a sender Alice is given a specimen of a quantum state $\rho$ drawn from a known ensemble (but without knowing what $\rho$ is), and she transmits sufficient quantum data to a receiver Bob so that he can decode a near perfect specimen of $\rho$. For many such states drawn i.i.d. from the ensemble, the asymptotically achievable rate is the number of qubits required to be transmitted per state. The Holevo information is a lower bound for the achievable rate, and is attained for pure state ensembles, or in the related scenario of entanglement-assisted visible compression of mixed states wherein Alice knows what state is drawn. In this paper, we prove a general, robust, lower bound on the achievable rate for ensembles of classical states, which holds even in the least demanding setting when Alice and Bob share free entanglement and a constant per-copy error is allowed. We apply the bound to a specific ensemble of only two states and prove a near-maximal separation between the best achievable rate and the Holevo information for constant error. Since the states are classical, the observed incompressibility is not fundamentally quantum mechanical. We lower bound the difference between the achievable rate and the Holevo information in terms of quantitative limitations to clone the specimen or to distinguish the two classical states.
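For reference, the Holevo information of an ensemble $\{p_i, \rho_i\}$, cited here as the lower bound on the achievable rate, is the standard quantity (textbook definition, not quoted from the paper): $\chi = S\big(\sum_i p_i \rho_i\big) - \sum_i p_i S(\rho_i)$, where $S(\rho) = -\mathrm{Tr}(\rho \log \rho)$ is the von Neumann entropy.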