
Limits of the Kucera-Gacs coding method

Published by: Dr George Barmpalias
Publication date: 2017
Paper language: English





Every real is computable from a Martin-Loef random real. This well known result in algorithmic randomness was proved by Kucera and Gacs. In this survey article we discuss various approaches to the problem of coding an arbitrary real into a Martin-Loef random real, and also describe new results concerning optimal methods of coding. We start with a simple presentation of the original methods of Kucera and Gacs and then rigorously demonstrate their limitations in terms of the size of the redundancy in the codes that they produce. Armed with a deeper understanding of these methods, we then proceed to motivate and illustrate aspects of the new coding method that was recently introduced by Barmpalias and Lewis-Pye and which achieves optimal logarithmic redundancy, an exponential improvement over the original redundancy bounds.
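The redundancy bounds discussed above can be illustrated numerically. As we recall it, the optimality criterion of Barmpalias and Lewis-Pye says that coding into a random real is possible with redundancy $g$ exactly when the series $\sum_n 2^{-g(n)}$ converges. The Python sketch below (illustrative only, with an arbitrary truncation of the series) contrasts redundancy $\log_2 n$, whose series is the divergent harmonic series, with $\log_2 n + 2\log_2\log_2 n$, whose series converges; the function name is our own, not from the paper.

```python
import math

def partial_sum(g, start=2, terms=200_000):
    """Truncated partial sum of the series sum_n 2^(-g(n))."""
    return sum(2.0 ** (-g(n)) for n in range(start, start + terms))

# Redundancy log2(n): the terms 2^(-log2 n) = 1/n form the harmonic series,
# which diverges, so the partial sums grow without bound.
harmonic = partial_sum(lambda n: math.log2(n))

# Redundancy log2(n) + 2*log2(log2(n)): the terms are 1/(n * (log2 n)^2),
# a convergent series, so the partial sums stay bounded.
convergent = partial_sum(lambda n: math.log2(n) + 2 * math.log2(math.log2(n)))
```

Both redundancy functions are logarithmic up to lower-order terms, which is why the optimal bound is described as "logarithmic" even though the bare $\log_2 n$ series diverges.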




Read also

Natasha Dobrinen (2019)
This article discusses some recent trends in Ramsey theory on infinite structures. Trees and their Ramsey theory have been vital to these investigations. The main ideas behind the author's recent method of trees with coding nodes are presented, showing how they can be useful both for coding structures with forbidden configurations as well as those with none. Using forcing as a tool for finite searches has allowed the development of Ramsey theory on such trees, leading to solutions for finite big Ramsey degrees of Henson graphs as well as infinite dimensional Ramsey theory of copies of the Rado graph. Possible future directions for applications of these methods are discussed.
We realize the Jiang-Su algebra, all UHF algebras, and the hyperfinite II$_{1}$ factor as Fraisse limits of suitable classes of structures. Moreover, by means of Fraisse theory we provide new examples of AF algebras with strong homogeneity properties. As a consequence of our analysis we deduce Ramsey-theoretic results about the class of full-matrix algebras.
There is a Turing computable embedding $\Phi$ of directed graphs in undirected graphs. Moreover, there is a fixed tuple of formulas that gives a uniform interpretation; i.e., for all directed graphs $A$, these formulas interpret $A$ in $\Phi(A)$. It follows that $A$ is Medvedev reducible to $\Phi(A)$ uniformly; i.e., there is a fixed Turing operator that serves for all $A$. We observe that there is a graph $G$ that is not Medvedev reducible to any linear ordering. Hence, $G$ is not effectively interpreted in any linear ordering. Similarly, there is a graph that is not interpreted in any linear ordering using computable $\Sigma_2$ formulas. Any graph can be interpreted in a linear ordering using computable $\Sigma_3$ formulas. Friedman and Stanley gave a Turing computable embedding $L$ of directed graphs in linear orderings. We show that there is no fixed tuple of $L_{\omega_1,\omega}$ formulas that, for all $G$, interpret the input graph $G$ in the output linear ordering $L(G)$. Harrison-Trainor and Montalban have also shown this, by a quite different proof.
It has been established that when the gradient coding problem is distributed among $n$ servers, the computation load (number of stored data partitions) of each worker is at least $s+1$ in order to resist $s$ stragglers. This scheme incurs a large overhead when the number of stragglers $s$ is large. In this paper, we focus on a new framework called \emph{approximate gradient coding} to mitigate stragglers in distributed learning. We show that, to exactly recover the gradient with high probability, the computation load is lower bounded by $O(\log(n)/\log(n/s))$. We also propose a code that exactly matches this lower bound. We identify a fundamental three-fold tradeoff for any approximate gradient coding scheme, $d \geq O(\log(1/\epsilon)/\log(n/s))$, where $d$ is the computation load and $\epsilon$ is the error of the gradient. We give an explicit code construction based on a random edge removal process that achieves the derived tradeoff. We implement our schemes and demonstrate the advantage of the approaches over the current fastest gradient coding strategies.
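To get a feel for the gap between exact and approximate gradient coding described in the abstract above, the hypothetical helpers below plug numbers into the two bounds; the constants hidden by the $O(\cdot)$ notation are ignored, so the values are only indicative.

```python
import math

def exact_recovery_load(s):
    # Exact gradient coding: each worker stores at least s + 1 partitions
    # to tolerate s stragglers.
    return s + 1

def approximate_recovery_load(n, s):
    # Approximate gradient coding: the lower bound scales as log(n)/log(n/s)
    # (hidden constants ignored in this illustration).
    return math.log(n) / math.log(n / s)

# With n = 1000 workers and s = 100 stragglers, the exact scheme needs
# load 101, while the approximate bound scales like log(1000)/log(10) = 3.
exact = exact_recovery_load(100)
approx = approximate_recovery_load(1000, 100)
```

This illustrates the abstract's point: when $s$ is large relative to $n$, the $s+1$ requirement of exact recovery dwarfs the logarithmic load that suffices for approximate recovery.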
The general theory developed by Ben Yaacov for metric structures provides Fraisse limits which are approximately ultrahomogeneous. We show here that this result can be strengthened in the case of relational metric structures. We give an extra condition that guarantees exact ultrahomogeneous limits. The condition is quite general. We apply it to stochastic processes, the class of diversities, and its subclass of $L_1$ diversities.