
Santha-Vazirani sources, deterministic condensers and very strong extractors

Posted by Dmitry Gavinsky
Publication date: 2019
Research field: computer science
Paper language: English





The notion of semi-random sources, also known as Santha-Vazirani (SV) sources, stands for a sequence of $n$ bits where, for every $i \in [n]$, the dependence of the $i$th bit on the previous $i-1$ bits is limited. If the dependence of the $i$th bit on the remaining $n-1$ bits is limited, then this is a strong SV-source. Even strong SV-sources are known not to admit (universal) deterministic extractors, but they have seeded extractors, as their min-entropy is $\Omega(n)$. It is intuitively obvious that strong SV-sources are more than just high-min-entropy sources, and this work explores that intuition. Deterministic condensers are known not to exist for general high-min-entropy sources; in contrast, we construct, for any constants $\epsilon, \delta \in (0,1)$, a deterministic condenser that maps $n$ bits coming from a strong SV-source with bias at most $\delta$ to $\Omega(n)$ bits of min-entropy rate at least $1-\epsilon$. In conclusion we observe that deterministic condensers are closely related to very strong extractors, a proposed strengthening of the notion of strong (seeded) extractors: in particular, our constructions can be viewed as very strong extractors for the family of strong Santha-Vazirani distributions. The notion of very strong extractors requires that the output remain unpredictable even to someone who knows not only the seed value (as in the case of strong extractors), but also the extractor's outputs corresponding to the same input value with each of the preceding seed values (say, under the lexicographic ordering). Very strong extractors closely resemble the original notion of SV-sources, except that the bits must satisfy the unpredictability requirement only on average.
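
As a concrete illustration of the model (a minimal sketch, not taken from the paper; the adversary interface and sample counts are illustrative), the following Python snippet samples a bias-$\delta$ SV source and checks empirically that even a maximally aggressive adversary leaves min-entropy rate about $\log_2(1/(1/2+\delta))$ per bit, which is $\Omega(n)$ in total for constant $\delta$:

```python
import math
import random
from collections import Counter

def sample_sv_source(n, delta, adversary):
    """Draw n bits from a Santha-Vazirani source with bias delta: before each
    bit, the adversary picks Pr[bit = 1] anywhere in [1/2 - delta, 1/2 + delta],
    possibly depending on all bits produced so far."""
    bits = []
    for _ in range(n):
        p = adversary(tuple(bits))
        assert abs(p - 0.5) <= delta + 1e-12, "adversary exceeded its bias budget"
        bits.append(1 if random.random() < p else 0)
    return tuple(bits)

def empirical_min_entropy(draws):
    """Estimate H_min = -log2(max_x Pr[X = x]) from empirical frequencies."""
    counts = Counter(draws)
    p_max = max(counts.values()) / len(draws)
    return -math.log2(p_max)

# A maximally aggressive adversary: always pushes toward 1 with the full bias.
n, delta = 8, 0.2
always_up = lambda prefix: 0.5 + delta
draws = [sample_sv_source(n, delta, always_up) for _ in range(200_000)]

# An SV source with bias delta has min-entropy at least n * log2(1/(1/2 + delta)).
print(f"empirical H_min ~ {empirical_min_entropy(draws):.2f}")
print(f"lower bound:      {n * math.log2(1 / (0.5 + delta)):.2f}")
```

High min-entropy is all a seeded extractor needs; the point of the paper is that the extra bit-by-bit structure of strong SV-sources buys strictly more, namely deterministic condensing.
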




Read also

Salman Beigi, Omid Etesami, 2014
A Santha-Vazirani (SV) source is a sequence of random bits where the conditional distribution of each bit, given the previous bits, can be partially controlled by an adversary. Santha and Vazirani show that deterministic randomness extraction from these sources is impossible. In this paper, we study the generalization of SV sources to non-binary sequences. We show that unlike the binary case, deterministic randomness extraction in the generalized case is sometimes possible. We present a necessary condition and a sufficient condition for the possibility of deterministic randomness extraction. These two conditions coincide in non-degenerate cases. Next, we turn to a distributed setting. In this setting the SV source consists of a random sequence of pairs $(a_1, b_1), (a_2, b_2), \ldots$ distributed between two parties, where the first party receives the $a_i$'s and the second one receives the $b_i$'s. The goal of the two parties is to extract common randomness without communication. Using the notion of maximal correlation, we prove a necessary condition and a sufficient condition for the possibility of common randomness extraction from these sources. Based on these two conditions, the problem of common randomness extraction essentially reduces to the problem of randomness extraction from (non-distributed) SV sources. This result generalizes results of Gacs and Korner, and of Witsenhausen, about common randomness extraction from i.i.d. sources to adversarial sources.
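
The binary impossibility result cited above can be verified directly for small $n$: against any candidate extractor $f$, an SV adversary with bias $\delta$ can shift $\Pr[f(X)=1]$ by at least $\delta$. The sketch below (illustrative, not from the paper) computes the adversary's optimal strategy exactly by dynamic programming over the prefix tree:

```python
from functools import lru_cache

def best_bias(f, n, delta):
    """Largest Pr[f(X) = 1] an SV adversary with bias delta can force against
    a candidate extractor f: {0,1}^n -> {0,1}, computed exactly by dynamic
    programming over the prefix tree (exponential in n, so keep n small)."""
    @lru_cache(maxsize=None)
    def value(prefix):
        if len(prefix) == n:
            return float(f(prefix))
        v0, v1 = value(prefix + (0,)), value(prefix + (1,))
        # The optimal adversary tilts Pr[next bit = 1] by delta toward the
        # subtree with the higher continuation value.
        return (v0 + v1) / 2 + delta * abs(v1 - v0)
    return value(())

# Even parity, a natural extractor candidate, gets biased by delta.
parity = lambda bits: sum(bits) % 2
print(best_bias(parity, n=6, delta=0.1))  # 0.6 = 1/2 + delta
```
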
We study the problem of extracting random bits from weak sources that are sampled by algorithms with limited memory. This model of small-space sources was introduced by Kamp, Rao, Vadhan and Zuckerman (STOC 2006), and falls into a line of research initiated by Trevisan and Vadhan (FOCS 2000) on extracting randomness from weak sources that are sampled by computationally bounded algorithms. Our main results are the following. 1. We obtain near-optimal extractors for small-space sources in the polynomial error regime. For space $s$ sources over $n$ bits, our extractors require just $k \geq s \cdot \mathrm{polylog}(n)$ entropy. This is an exponential improvement over the previous best result, which required $k \geq s^{1.1} \cdot 2^{\log^{0.51} n}$ (Chattopadhyay and Li, STOC 2016). 2. We obtain improved extractors for small-space sources in the negligible error regime. For space $s$ sources over $n$ bits, our extractors require entropy $k \geq n^{1/2+\delta} \cdot s^{1/2-\delta}$, whereas the previous best result required $k \geq n^{2/3+\delta} \cdot s^{1/3-\delta}$ (Chattopadhyay, Goodman, Goyal and Li, STOC 2020). To obtain our first result, the key ingredient is a new reduction from small-space sources to affine sources, allowing us to simply apply a good affine extractor. To obtain our second result, we must develop some new machinery, since we do not have low-error affine extractors that work for low entropy. Our main tool is a significantly improved extractor for adversarial sources, which is built via a simple framework that makes novel use of a certain kind of leakage-resilient extractors (known as cylinder intersection extractors), by combining them with a general type of extremal designs. Our key ingredient is the first derandomization of these designs, which we obtain using new connections to coding theory and additive combinatorics.
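
For intuition about the model (a toy example, not a construction from the paper), a space-$s$ source is one produced by a sampler that keeps only $s$ bits of state while emitting the output one bit at a time; the simplest nontrivial instance is a two-state Markov chain of "sticky" bits:

```python
import random

def sticky_bits(n, p_flip=0.1):
    """A toy space-1 source: the sampler's entire memory is the current bit,
    which flips with probability p_flip at each step (a two-state Markov
    chain). Bounded-memory samplers like this are the small-space model."""
    bit = random.randint(0, 1)
    out = []
    for _ in range(n):
        out.append(bit)
        if random.random() < p_flip:
            bit ^= 1
    return out

print("".join(map(str, sticky_bits(40))))  # long runs of 0s and 1s
```
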
We describe a construction of explicit affine extractors over large finite fields with exponentially small error and linear output length. Our construction relies on a deep theorem of Deligne giving tight estimates for exponential sums over smooth varieties in high dimensions.
A pseudo-deterministic algorithm is a (randomized) algorithm which, when run multiple times on the same input, with high probability outputs the same result on all executions. Classic streaming algorithms, such as those for finding heavy hitters, approximate counting, $\ell_2$ approximation, and finding a nonzero entry in a vector (for turnstile algorithms), are not pseudo-deterministic. For example, in the instance of finding a nonzero entry in a vector, for any known low-space algorithm $A$, there exists a stream $x$ so that running $A$ twice on $x$ (using different randomness) would with high probability result in two different entries as the output. In this work, we study whether it is inherent that these algorithms output different values on different executions. That is, we ask whether these problems have low-memory pseudo-deterministic algorithms. For instance, we show that there is no low-memory pseudo-deterministic algorithm for finding a nonzero entry in a vector (given in a turnstile fashion), and also that there is no low-dimensional pseudo-deterministic sketching algorithm for $\ell_2$ norm estimation. We also exhibit problems which do have low-memory pseudo-deterministic algorithms but no low-memory deterministic algorithm, such as outputting a nonzero row of a matrix, or outputting a basis for the row-span of a matrix. We also investigate multi-pseudo-deterministic algorithms: algorithms which with high probability output one of a few options. We show the first lower bounds for such algorithms. This implies that there are streaming problems such that every low-space algorithm for the problem must have inputs on which there are many valid outputs, each with a significant probability of being output.
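
To see the distinction in miniature (a toy sketch; the function and scanning strategy are illustrative stand-ins, not the paper's streaming model), a randomized find-a-nonzero-entry routine can be correct on every run and still fail to be pseudo-deterministic, because different runs may legitimately return different indices:

```python
import random

def find_nonzero_randomized(x):
    """Toy stand-in for a randomized routine: return the index of some
    nonzero entry of x, scanning from a random starting point."""
    n = len(x)
    start = random.randrange(n)
    for k in range(n):
        i = (start + k) % n
        if x[i] != 0:
            return i
    return None  # x is all zeros

x = [0, 3, 0, 7, 0, 5]
answers = {find_nonzero_randomized(x) for _ in range(50)}
print(answers)  # typically several distinct valid indices, e.g. {1, 3, 5}
# A pseudo-deterministic algorithm would have to return the SAME index on
# (almost) every run; the paper shows low-memory streaming algorithms cannot.
```
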
Given a DNF formula on $n$ variables, the two natural size measures are the number of terms, or size $s(f)$, and the maximum width of a term, $w(f)$. It is folklore that short DNF formulas can be made narrow. We prove a converse, showing that narrow formulas can be sparsified. More precisely, any width-$w$ DNF, irrespective of its size, can be $\epsilon$-approximated by a width-$w$ DNF with at most $(w \log(1/\epsilon))^{O(w)}$ terms. We combine our sparsification result with the work of Luby and Velickovic to give a faster deterministic algorithm for approximately counting the number of satisfying solutions to a DNF. Given a formula on $n$ variables with $\mathrm{poly}(n)$ terms, we give a deterministic $n^{\tilde{O}(\log\log n)}$ time algorithm that computes an additive $\epsilon$-approximation to the fraction of satisfying assignments of $f$ for $\epsilon = 1/\mathrm{poly}(\log n)$. The previous best result, due to Luby and Velickovic from nearly two decades ago, had a run-time of $n^{\exp(O(\sqrt{\log\log n}))}$.
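
For contrast with the deterministic algorithm described above, the standard randomized baseline achieves an additive $\epsilon$-approximation with $O(1/\epsilon^2)$ samples. This sketch (illustrative; the term encoding is an assumption of ours) shows the plain Monte Carlo version that the paper's derandomization competes with:

```python
import random

def dnf_fraction_mc(terms, n, eps):
    """Additive eps-approximation of the fraction of assignments satisfying a
    DNF by plain Monte Carlo sampling: a randomized baseline, NOT the paper's
    deterministic algorithm. Each term is a dict {variable index: required bit}."""
    samples = int(4 / eps**2)  # Chernoff-style O(1/eps^2) sample bound
    hits = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        if any(all(x[v] == b for v, b in term.items()) for term in terms):
            hits += 1
    return hits / samples

# f = (x0 AND NOT x1) OR x2; the true satisfying fraction is 5/8.
terms = [{0: 1, 1: 0}, {2: 1}]
print(dnf_fraction_mc(terms, n=3, eps=0.01))  # ~0.625
```
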