We study the process of information dispersal in a network with communication errors and local error-correction. Specifically, we consider a simple model where a single bit of information, initially known to a single source, is dispersed through the network, and communication errors lead to differences in the agents' opinions on this information. Naturally, such errors can very quickly make the communication completely unreliable, and in this work we study to what extent this unreliability can be mitigated by local error-correction, where nodes periodically correct their opinion based on the opinions of (some subset of) their neighbors. We analyze how the error spreads in the early stages of information dispersal by monitoring the average opinion, i.e., the fraction of agents that hold the correct information among all nodes that hold an opinion at a given time. Our main results show that even with significant effort in error-correction, tiny amounts of noise can leave the average opinion nearly uncorrelated with the truth in the early stages. We also propose some local methods to help agents gauge when the information they hold has stabilized.
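The dynamics are easy to explore by simulation. The sketch below is a minimal illustration, not the paper's exact model: it assumes push-based spreading on the complete graph, a per-message flip probability delta, and a periodic majority-of-sample correction step; all parameter values are illustrative.

    import random

    def simulate(n=2000, delta=0.01, correction_every=5, sample_size=5, rounds=60):
        """Track the average opinion as a single true bit spreads with noise.

        Opinions: None = uninformed, 0/1 = held opinion. The truth is 1.
        """
        opinions = [None] * n
        opinions[0] = 1  # the source knows the true bit
        for t in range(1, rounds + 1):
            informed = [i for i in range(n) if opinions[i] is not None]
            # Push step: every informed node tells a random node its opinion,
            # and the message is flipped with probability delta.
            new_opinions = list(opinions)
            for i in informed:
                j = random.randrange(n)
                if new_opinions[j] is None:
                    msg = opinions[i]
                    if random.random() < delta:
                        msg = 1 - msg
                    new_opinions[j] = msg
            opinions = new_opinions
            # Local error-correction: periodically, each informed node adopts
            # the majority opinion of a small random sample of informed nodes.
            if t % correction_every == 0:
                informed = [i for i in range(n) if opinions[i] is not None]
                corrected = list(opinions)
                for i in informed:
                    sample = random.choices(informed, k=sample_size)
                    ones = sum(opinions[j] for j in sample)
                    corrected[i] = 1 if 2 * ones > sample_size else 0
                opinions = corrected
            informed = [i for i in range(n) if opinions[i] is not None]
            avg = sum(opinions[i] for i in informed) / len(informed)
            print(f"round {t:3d}: informed={len(informed):5d} avg_opinion={avg:.3f}")

    if __name__ == "__main__":
        simulate()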
We consider the approximability of constraint satisfaction problems in the streaming setting. For every constraint satisfaction problem (CSP) on $n$ variables taking values in $\{0,\ldots,q-1\}$, we prove that improving over the trivial approximability by a factor of $q$ requires $\Omega(n)$ space even on instances with $O(n)$ constraints. We also identify a broad subclass of problems for which any improvement over the trivial approximability requires $\Omega(n)$ space. The key technical core is an optimal $q^{-(k-1)}$-inapproximability for the case where every constraint is given by a system of $k-1$ linear equations $\bmod\ q$ over $k$ variables. Prior to our work, no such hardness was known for an approximation factor less than $1/2$ for any CSP. Our work builds on and extends the work of Kapralov and Krachun (Proc. STOC 2019), who showed a linear lower bound on any non-trivial approximation of the max cut in graphs. This corresponds roughly to the case of Max $k$-LIN $\bmod\ q$ with $k=q=2$. Each one of the extensions provides non-trivial technical challenges that we overcome in this work.
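For intuition about the trivial baseline: a uniformly random assignment satisfies a constraint given by $k-1$ independent linear equations $\bmod\ q$ over $k$ variables with probability exactly $q^{-(k-1)}$, and this is the factor the lower bound shows cannot be beaten in $o(n)$ space. The Monte Carlo sketch below checks this for $k=2$, $q=3$ (one equation $x_u + x_v = b \pmod 3$ per constraint); the instance generation and parameters are illustrative.

    import random

    q, k = 3, 2  # one linear equation mod q over k variables per constraint
    n, m = 100, 5000

    # Random instance: constraint j is x_u + x_v = b (mod q) on distinct u, v.
    constraints = []
    for _ in range(m):
        u, v = random.sample(range(n), 2)
        constraints.append((u, v, random.randrange(q)))

    def satisfied_fraction(x):
        return sum((x[u] + x[v]) % q == b for u, v, b in constraints) / m

    # A uniformly random assignment satisfies ~ q^{-(k-1)} of the constraints.
    trials = [satisfied_fraction([random.randrange(q) for _ in range(n)])
              for _ in range(20)]
    print(f"average satisfied fraction: {sum(trials) / len(trials):.3f}"
          f" (trivial bound q^-(k-1) = {q ** -(k - 1):.3f})")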
An ordering constraint satisfaction problem (OCSP) is given by a positive integer $k$ and a constraint predicate $\Pi$ mapping permutations on $\{1,\ldots,k\}$ to $\{0,1\}$. Given an instance of OCSP$(\Pi)$ on $n$ variables and $m$ constraints, the goal is to find an ordering of the $n$ variables that maximizes the number of constraints satisfied, where a constraint specifies a sequence of $k$ distinct variables and is satisfied by an ordering on the $n$ variables if the ordering induced on the $k$ variables in the constraint satisfies $\Pi$. OCSPs capture natural problems including Maximum Acyclic Subgraph (MAS) and Betweenness. In this work we consider the task of approximating the maximum number of satisfiable constraints in the (single-pass) streaming setting, where an instance is presented as a stream of constraints. We show that for every $\Pi$, OCSP$(\Pi)$ is approximation-resistant to $o(n)$-space streaming algorithms. This space bound is tight up to polylogarithmic factors. In the case of MAS our result shows that for every $\epsilon>0$, MAS is not $1/2+\epsilon$-approximable in $o(n)$ space. The previous best inapproximability result only ruled out a $3/4$-approximation in $o(\sqrt n)$ space. Our results build on recent work of Chou, Golovnev, Sudan, Velingker, and Velusamy, who show tight, linear-space inapproximability results for a broad class of (non-ordering) constraint satisfaction problems over arbitrary (finite) alphabets. We design a family of appropriate CSPs (one for every $q$) from any given OCSP and apply their work to this family of CSPs. We show that the hard instances from this earlier work have a particular small-set expansion property. By exploiting this combinatorial property, in combination with the hardness results of the resulting families of CSPs, we give optimal inapproximability results for all OCSPs.
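Approximation resistance means nothing beats the trivial baseline of a uniformly random ordering, which satisfies each constraint with probability equal to the fraction of permutations accepted by $\Pi$; for MAS ($k=2$, satisfied when the first variable precedes the second) that probability is $1/2$. A minimal sketch of this baseline (the instance generation is illustrative):

    import random

    n, m = 50, 2000
    # MAS instance: each constraint (u, v) asks that u precede v in the ordering.
    edges = [tuple(random.sample(range(n), 2)) for _ in range(m)]

    def satisfied(ordering):
        pos = {v: i for i, v in enumerate(ordering)}
        return sum(pos[u] < pos[v] for u, v in edges) / m

    ordering = list(range(n))
    random.shuffle(ordering)
    print(f"random ordering satisfies {satisfied(ordering):.3f}"
          " of constraints (expected 1/2)")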
A constraint satisfaction problem (CSP), Max-CSP$({\cal F})$, is specified by a finite set of constraints ${\cal F} \subseteq \{[q]^k \to \{0,1\}\}$ for positive integers $q$ and $k$. An instance of the problem on $n$ variables is given by $m$ applications of constraints from ${\cal F}$ to subsequences of the $n$ variables, and the goal is to find an assignment to the variables that satisfies the maximum number of constraints. In the $(\gamma,\beta)$-approximation version of the problem, for parameters $0 \leq \beta < \gamma \leq 1$, the goal is to distinguish instances where at least a $\gamma$ fraction of the constraints can be satisfied from instances where at most a $\beta$ fraction of the constraints can be satisfied. In this work we consider the approximability of this problem in the context of streaming algorithms and give a dichotomy result in the dynamic setting, where constraints can be inserted or deleted. Specifically, for every family ${\cal F}$ and every $\beta < \gamma$, we show that either the approximation problem is solvable with polylogarithmic space in the dynamic setting, or it is not solvable with $o(\sqrt{n})$ space. We also establish tight inapproximability results for a broad subclass in the streaming insertion-only setting. Our work builds on, and significantly extends, previous work by the authors, who consider the special case of Boolean variables ($q=2$), singleton families ($|{\cal F}| = 1$), and constraints that may be placed on variables or their negations. Our framework extends the previous work non-trivially, allowing us to appeal to richer norm-estimation algorithms for our algorithmic results. For our negative results we introduce new variants of the communication problems studied in the previous work, build new reductions for these problems, and extend the technical parts of previous works.
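To make the $(\gamma,\beta)$-gap problem concrete, the sketch below solves it by brute force on a tiny instance; it illustrates only the problem definition, not the streaming algorithms or lower bounds of the paper. The constraint family, instance, and parameters are illustrative.

    from itertools import product

    q, k, n = 2, 3, 4
    # Illustrative family with one constraint: "not all k values are equal".
    def not_all_equal(vals):
        return len(set(vals)) > 1

    # Instance: constraint applications on subsequences of the variables.
    apps = [(0, 1, 2), (1, 2, 3), (0, 2, 3), (0, 1, 3)]

    def max_sat_fraction():
        best = 0
        for x in product(range(q), repeat=n):
            best = max(best, sum(not_all_equal([x[i] for i in a]) for a in apps))
        return best / len(apps)

    def distinguish(gamma, beta):
        """Decide which side of the (gamma, beta)-gap the instance lies on."""
        val = max_sat_fraction()
        if val >= gamma:
            return "at least a gamma fraction satisfiable"
        if val <= beta:
            return "at most a beta fraction satisfiable"
        return "in the gap (either answer is allowed)"

    print(max_sat_fraction(), distinguish(0.9, 0.6))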
A Boolean constraint satisfaction problem (CSP), Max-CSP$(f)$, is a maximization problem specified by a constraint $f:\{-1,1\}^k\to\{0,1\}$. An instance of the problem consists of $m$ constraint applications on $n$ Boolean variables, where each constraint application applies the constraint to $k$ literals chosen from the $n$ variables and their negations. The goal is to compute the maximum number of constraints that can be satisfied by a Boolean assignment to the $n$ variables. In the $(\gamma,\beta)$-approximation version of the problem, for parameters $\gamma \geq \beta$ in $[0,1]$, the goal is to distinguish instances where at least a $\gamma$ fraction of the constraints can be satisfied from instances where at most a $\beta$ fraction of the constraints can be satisfied. In this work we consider the approximability of Max-CSP$(f)$ in the (dynamic) streaming setting, where constraints are inserted (and may also be deleted in the dynamic setting) one at a time. We completely characterize the approximability of all Boolean CSPs in the dynamic streaming setting. Specifically, given $f$, $\gamma$ and $\beta$, we show that either (1) the $(\gamma,\beta)$-approximation version of Max-CSP$(f)$ has a probabilistic dynamic streaming algorithm using $O(\log n)$ space, or (2) for every $\varepsilon > 0$ the $(\gamma-\varepsilon,\beta+\varepsilon)$-approximation version of Max-CSP$(f)$ requires $\Omega(\sqrt{n})$ space for probabilistic dynamic streaming algorithms. We also extend previously known results in the insertion-only setting to a wide variety of cases, in particular the case of $k=2$, where we get a dichotomy, and the case where the satisfying assignments of $f$ support a distribution on $\{-1,1\}^k$ with uniform marginals.
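The wrinkle relative to the unsigned setting is that constraints apply to literals: over $\{-1,1\}$, negating a literal is multiplying by $-1$. A minimal sketch of evaluating one Max-CSP$(f)$ instance under this convention (the choice of $f$ and the instance are illustrative):

    from itertools import product

    # Illustrative constraint f: {-1,1}^2 -> {0,1}, satisfied if some literal is 1.
    def f(a, b):
        return int(a == 1 or b == 1)

    # A constraint application lists k signed, 1-based variable indices:
    # +i applies the constraint to the literal x_i, -i to its negation -x_i.
    apps = [(+1, +2), (-1, +3), (-2, -3), (+1, -3)]
    n = 3

    def num_satisfied(x):  # x[1..n] in {-1,1}; x[0] is unused padding
        return sum(f(*((1 if s > 0 else -1) * x[abs(s)] for s in app))
                   for app in apps)

    best = max(num_satisfied([0] + list(p)) for p in product([-1, 1], repeat=n))
    print(f"max satisfied: {best} of {len(apps)}")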
Wooley ({\em J. Number Theory}, 1996) gave an elementary proof of a Bezout-like theorem allowing one to count the number of isolated integer roots of a system of polynomial equations modulo some prime power. In this article, we adapt the proof to a slightly different setting. Specifically, we consider polynomials with coefficients from a polynomial ring $\mathbb{F}[t]$ for an arbitrary field $\mathbb{F}$ and give an upper bound on the number of isolated roots modulo $t^s$ for an arbitrary positive integer $s$. In particular, using $s=1$, we can bound the number of isolated roots of a system of polynomials over an arbitrary field $\mathbb{F}$.
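For the $s=1$ specialization, the bound is the classical Bezout-style one: a system of $n$ polynomials in $n$ variables of degrees $d_1,\ldots,d_n$ has at most $\prod_i d_i$ isolated common roots. The brute-force check below illustrates this over a small prime field $\mathbb{F}_p$; the system is illustrative, and counting all roots is a faithful check only because this system has finitely many roots, all of which are isolated.

    from itertools import product

    p = 7  # work over the small prime field F_p

    # Illustrative system in two variables, degrees 2 and 1 (Bezout bound 2*1):
    #   f(x, y) = x^2 + y^2 - 1,   g(x, y) = x - y
    f = lambda x, y: (x * x + y * y - 1) % p
    g = lambda x, y: (x - y) % p

    roots = [(x, y) for x, y in product(range(p), repeat=2)
             if f(x, y) == 0 and g(x, y) == 0]
    print(f"common roots over F_{p}: {roots} (count {len(roots)} <= bound 2)")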
Noah Singer, Madhu Sudan (2021)
We study the log-rank conjecture from the perspective of point-hyperplane incidence geometry. We formulate the following conjecture: given a point set in $\mathbb{R}^d$ that is covered by constant-sized sets of parallel hyperplanes, there exists an affine subspace that accounts for a large (i.e., $2^{-\operatorname{polylog}(d)}$) fraction of the incidences. Alternatively, our conjecture may be interpreted linear-algebraically as follows: any rank-$d$ matrix containing at most $O(1)$ distinct entries in each column contains a submatrix of fractional size $2^{-\operatorname{polylog}(d)}$ in which each column contains one distinct entry. We prove that our conjecture is equivalent to the log-rank conjecture. Motivated by the connections above, we revisit well-studied questions in point-hyperplane incidence geometry without structural assumptions (i.e., the existence of partitions). We give an elementary argument for the existence of complete bipartite subgraphs of density $\Omega(\epsilon^{2d}/d)$ in any $d$-dimensional configuration with incidence density $\epsilon$. We also improve an upper-bound construction of Apfelbaum and Sharir (SIAM J. Discrete Math. '07), yielding a configuration whose complete bipartite subgraphs are exponentially small and whose incidence density is $\Omega(1/\sqrt d)$. Finally, we discuss various constructions (due to others) which yield configurations with incidence density $\Omega(1)$ and bipartite subgraph density $2^{-\Omega(\sqrt d)}$. Our framework and results may help shed light on the difficulty of improving Lovett's $\tilde{O}(\sqrt{\operatorname{rank}(f)})$ bound (J. ACM '16) for the log-rank conjecture; in particular, any improvement on this bound would imply the first bipartite subgraph size bounds for parallel $3$-partitioned configurations which beat our generic bounds for unstructured configurations.
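To ground the two quantities being traded off: the incidence density is the fraction of incident (point, hyperplane) pairs, and a complete bipartite subgraph is a set of points all lying on a common set of hyperplanes. The sketch below computes both by brute force for a small planar configuration (lines as hyperplanes); everything about the configuration is illustrative.

    from itertools import product

    # Illustrative configuration in R^2: a 3x3 grid of points and the
    # lines y = a*x + b for small integer slopes a and intercepts b.
    points = list(product(range(3), repeat=2))
    lines = list(product(range(3), repeat=2))  # (slope a, intercept b)

    def incident(pt, ln):
        (x, y), (a, b) = pt, ln
        return y == a * x + b

    inc = sum(incident(p, l) for p in points for l in lines)
    print(f"incidence density: {inc}/{len(points) * len(lines)}"
          f" = {inc / (len(points) * len(lines)):.3f}")

    # With distinct lines in the plane, a complete bipartite incidence
    # subgraph with >= 2 points and >= 2 lines is impossible (two points
    # determine a line), so the extremes here are "stars":
    print("max lines through one point:",
          max(sum(incident(p, l) for l in lines) for p in points))
    print("max points on one line:",
          max(sum(incident(p, l) for p in points) for l in lines))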
Trace reconstruction considers the task of recovering an unknown string $x \in \{0,1\}^n$ given a number of independent traces, i.e., subsequences of $x$ obtained by randomly and independently deleting every symbol of $x$ with some probability $p$. The information-theoretic limit on the number of traces needed to recover a string of length $n$ is still unknown. This limit is essentially the same as the number of traces needed to determine, given strings $x$ and $y$ and traces of one of them, which string is the source. The most studied class of algorithms for the worst-case version of the problem are mean-based algorithms: a restricted class of distinguishers that only use the mean value of each coordinate on the given samples. In this work we study the limitations of mean-based algorithms on strings at small Hamming or edit distance. We show, on the one hand, that distinguishing strings that are nearby in Hamming distance is easy for such distinguishers. On the other hand, we show that distinguishing strings that are nearby in edit distance is hard for mean-based algorithms. Along the way we also describe a connection to the famous Prouhet-Tarry-Escott (PTE) problem, which shows a barrier to finding explicit hard-to-distinguish strings: namely, such strings would imply explicit short solutions to the PTE problem, a well-known difficult problem in number theory. Our techniques rely on complex-analytic arguments that involve careful trigonometric estimates, and algebraic techniques that include applications of Descartes' rule of signs for polynomials over the reals.
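A mean-based distinguisher uses only the per-coordinate means of the traces, padded to length $n$ (below, deleted symbols simply shift the rest left and the tail is zero-padded; this padding convention and all parameters are illustrative). The sketch estimates the mean trace of each candidate string and classifies fresh traces of the unknown source by which mean is closer, which tends to succeed for strings at small Hamming distance.

    import random

    def trace(x, p):
        """Pass x through the deletion channel: delete each bit with prob. p."""
        return [b for b in x if random.random() > p]

    def padded_mean(samples, n):
        """Coordinate-wise mean of traces, zero-padded to length n."""
        means = [0.0] * n
        for t in samples:
            for i, b in enumerate(t):
                means[i] += b
        return [m / len(samples) for m in means]

    def distinguish(x, y, p, source, n_samples=2000):
        n = len(x)
        mx = padded_mean([trace(x, p) for _ in range(n_samples)], n)
        my = padded_mean([trace(y, p) for _ in range(n_samples)], n)
        ms = padded_mean([trace(source, p) for _ in range(n_samples)], n)
        dx = sum((a - b) ** 2 for a, b in zip(ms, mx))
        dy = sum((a - b) ** 2 for a, b in zip(ms, my))
        return "x" if dx < dy else "y"

    random.seed(0)
    x = [random.randint(0, 1) for _ in range(40)]
    y = list(x); y[10] ^= 1  # nearby in Hamming distance: flip one bit
    print("guess:", distinguish(x, y, p=0.2, source=x))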
The well-known DeMillo-Lipton-Schwartz-Zippel lemma says that $n$-variate polynomials of total degree at most $d$ over grids, i.e., sets of the form $A_1 \times A_2 \times \cdots \times A_n$, form error-correcting codes (of distance at least $2^{-d}$, provided $\min_i\{|A_i|\} \geq 2$). In this work we explore their local decodability and (tolerant) local testability. While these aspects have been studied extensively when $A_1 = \cdots = A_n = \mathbb{F}_q$ is the same finite field, the setting where the $A_i$s are not the full field does not seem to have been explored before. In this work we focus on the case $A_i = \{0,1\}$ for every $i$. We show that for every field (finite or otherwise) there is a test whose query complexity depends only on the degree (and not on the number of variables). In contrast, we show that decodability is possible over fields of positive characteristic (with query complexity growing with the degree of the polynomial and the characteristic), but not over the reals, where the query complexity must grow with $n$. As a consequence we get a natural example of a code (one with a transitive group of symmetries) that is locally testable but not locally decodable. Classical results on local decoding and testing of polynomials have relied on the 2-transitive symmetries of the space of low-degree polynomials (under affine transformations). Grids do not possess this symmetry, so we introduce some new techniques to overcome this handicap; in particular, we use the hypercontractivity of the (constant-weight) noise operator on the Hamming cube.
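The distance claim, specialized to the grid $\{0,1\}^n$, says that a nonzero polynomial of total degree at most $d$ (without loss of generality multilinear on $\{0,1\}^n$) is nonzero on at least a $2^{-d}$ fraction of the cube. The exhaustive check below verifies this for a random low-degree multilinear polynomial with integer coefficients; the instance generation is illustrative.

    import random
    from itertools import combinations, product

    n, d = 8, 3

    # Random nonzero multilinear polynomial of total degree <= d, represented
    # as {monomial (a frozenset of variable indices): integer coefficient}.
    monomials = [frozenset(s) for r in range(d + 1)
                 for s in combinations(range(n), r)]
    poly = {m: random.randint(-3, 3) for m in monomials}
    if all(c == 0 for c in poly.values()):
        poly[frozenset()] = 1  # ensure the polynomial is nonzero

    def evaluate(x):
        return sum(c * all(x[i] for i in m) for m, c in poly.items())

    nonzeros = sum(evaluate(x) != 0 for x in product([0, 1], repeat=n))
    print(f"nonzero on {nonzeros}/{2 ** n} points;"
          f" DLSZ bound: 2^(n-d) = {2 ** (n - d)}")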
Any physical channel of communication offers two potential reasons why its capacity (the number of bits it can transmit in a unit of time) might be unbounded: (1) infinitely many choices of signal strength at any given instant of time, and (2) infinitely many instants of time at which signals may be sent. However, channel noise cancels out the potential unboundedness of the first aspect, leaving typical channels with only a finite capacity per instant of time. The latter source of infinity seems less studied. A potential source of unreliability that might restrict the capacity from the second aspect as well is delay: signals transmitted by the sender at a given point of time may not be received with a predictable delay at the receiving end. Here we examine this source of uncertainty by considering a simple discrete model of delay errors. In our model the communicating parties get to subdivide time as microscopically finely as they wish, but still have to cope with communication delays that are macroscopic and variable. The continuous process arises as the limit of our process as the time subdivision becomes infinitesimal. We taxonomize this class of communication channels based on whether the delays and the noise are stochastic or adversarial, and based on how much information each aspect has about the other when introducing its errors. We analyze the limits of such channels and reach somewhat surprising conclusions: the capacity of a physical channel is finitely bounded only if at least one of the two sources of error (signal noise or delay noise) is adversarial. In particular, the capacity is finitely bounded only if the delay is adversarial, or the noise is adversarial and acts with knowledge of the stochastic delay. If both error sources are stochastic, or if the noise is adversarial and independent of the stochastic delay, then the capacity of the associated physical channel is infinite.
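A caricature of the noiseless regime may help build intuition for why fine time subdivision can defeat delay. In the toy sketch below the sender subdivides one macroscopic time unit into micro slots and encodes a message as the micro-slot phase of a pulse; the delay, assumed here to be a whole number of macroscopic units, shifts the pulse but preserves its phase, so log2(micro) bits get through no matter how large the stochastic delay is. This is only an illustration of one regime; the model, parameters, and encoding are illustrative and much simpler than those analyzed in the paper.

    import math
    import random

    micro = 1024  # micro-slots per macroscopic time unit; sender picks the phase

    def send(message, max_delay=50):
        """Encode a message (an int < micro) as the micro-slot phase of a pulse,
        then apply a stochastic macroscopic delay (whole macro units)."""
        slot = message                        # transmit in micro-slot `message`
        delay = random.randint(0, max_delay)  # stochastic macroscopic delay
        return slot + delay * micro           # arrival time, in micro-slots

    def receive(arrival):
        # The delay is a whole number of macro units, so the phase is intact.
        return arrival % micro

    msg = random.randrange(micro)
    assert receive(send(msg)) == msg
    print(f"recovered {math.log2(micro):.0f} bits per pulse despite the delay")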