
On the computational tractability of statistical estimation on amenable graphs

Published by: Ahmed El Alaoui
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





We consider the problem of estimating a vector of discrete variables $(\theta_1,\cdots,\theta_n)$, based on noisy observations $Y_{uv}$ of the pairs $(\theta_u,\theta_v)$ on the edges of a graph $G=([n],E)$. This setting comprises a broad family of statistical estimation problems, including group synchronization on graphs, community detection, and low-rank matrix estimation. A large body of theoretical work has established sharp thresholds for weak and exact recovery, and sharp characterizations of the optimal reconstruction accuracy in such models, focusing however on the special case of Erdős–Rényi-type random graphs. The single most important finding of this line of work is the ubiquity of an information-computation gap. Namely, for many models of interest, a large gap is found between the optimal accuracy achievable by any statistical method, and the optimal accuracy achieved by known polynomial-time algorithms. Moreover, this gap is generally believed to be robust to small amounts of additional side information revealed about the $\theta_i$'s. How does the structure of the graph $G$ affect this picture? Is the information-computation gap a general phenomenon, or does it only apply to specific families of graphs? We prove that the picture is dramatically different for graph sequences converging to amenable graphs (including, for instance, $d$-dimensional grids). We consider a model in which an arbitrarily small fraction of the vertex labels is revealed, and show that a linear-time local algorithm can achieve reconstruction accuracy that is arbitrarily close to the information-theoretic optimum. We contrast this with the case of random graphs. Indeed, focusing on group synchronization on random regular graphs, we prove that the information-computation gap persists even when a small amount of side information is revealed.
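To make the setup concrete, the following is a minimal simulation sketch of one instance of this model: $\mathbb{Z}_2$ synchronization on a 2-D grid (an amenable example of the kind covered by the result), with a small revealed fraction of labels and a naive local majority-propagation rule. The grid size, noise level, revealed fraction, and the propagation rule itself are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of the model above, specialized to Z_2 synchronization on a
# 2-D grid. All parameter choices here are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def grid_edges(n):
    """Edges of an n x n grid graph with vertices indexed 0..n*n-1."""
    edges = []
    for i in range(n):
        for j in range(n):
            v = i * n + j
            if j + 1 < n:
                edges.append((v, v + 1))   # right neighbor
            if i + 1 < n:
                edges.append((v, v + n))   # bottom neighbor
    return edges

n = 30
edges = grid_edges(n)
theta = rng.choice([-1, 1], size=n * n)    # hidden labels theta_v in {-1, +1}

# Noisy edge observations: Y_uv = theta_u * theta_v, flipped with probability p.
p = 0.15
Y = {(u, v): theta[u] * theta[v] * (1 if rng.random() > p else -1)
     for u, v in edges}

# Side information: reveal a small fraction of the true labels.
revealed = rng.random(n * n) < 0.02
est = np.where(revealed, theta, 0)         # 0 marks "not yet estimated"

neighbors = {v: [] for v in range(n * n)}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

# Local algorithm: since Y_uv * theta_u is a noisy copy of theta_v, repeatedly
# set each unrevealed label by a majority vote over its neighbors' estimates.
for _ in range(50):
    for v in range(n * n):
        if revealed[v]:
            continue
        vote = sum(Y[min(u, v), max(u, v)] * est[u] for u in neighbors[v])
        if vote != 0:
            est[v] = np.sign(vote)

print(f"fraction of labels recovered: {(est == theta).mean():.3f}")
```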




Read also

In this note we study some properties of infinite percolation clusters on non-amenable graphs. In particular, we study the percolative properties of the complement of infinite percolation clusters. An approach based on mass transport is adapted to show that, for a large class of non-amenable graphs, the graph obtained by removing each site contained in an infinite percolation cluster has a critical percolation threshold that can be arbitrarily close to the critical threshold of the original graph, almost surely, as $p$ approaches $p_c$. Closely related is the self-destructive percolation process, introduced by J. van den Berg and R. Brouwer, for which we prove that an infinite cluster emerges for any small reinforcement.
In this work we study the fundamental limits of approximate recovery in the context of group testing. One of the most well-known, theoretically optimal, and easy-to-implement testing procedures is non-adaptive Bernoulli group testing, where all tests are conducted in parallel and each item is chosen to be part of any given test independently with some fixed probability. In this setting, there is an observed gap between the number of tests above which recovery is information-theoretically (IT) possible and the number of tests required by the best currently known efficient algorithms to succeed. Such gaps are often explained by a phase transition in the landscape of the solution space of the problem (an Overlap Gap Property phase transition). In this paper we seek to understand whether such a phenomenon takes place for Bernoulli group testing as well. Our main contributions are the following: (1) we provide first-moment evidence that, perhaps surprisingly, such a phase transition does not take place throughout the regime for which recovery is IT possible, which suggests that the model is in fact amenable to local search algorithms; (2) we prove the complete absence of bad local minima for a part of the hard regime, which implies an improvement over known theoretical results on the performance of efficient algorithms for approximate recovery without false negatives; and finally (3) we present extensive simulations that strongly suggest that a very simple local algorithm known as Glauber dynamics does indeed succeed, and can be used to efficiently implement the well-known (theoretically optimal) Smallest Satisfying Set (SSS) estimator.
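As an illustration of the Bernoulli design described above, the sketch below generates a random test matrix and decodes it with the textbook "definite non-defectives" (COMP) rule; the parameter values and the naive decoder are assumptions made here for illustration, not the SSS estimator or the Glauber dynamics studied in the paper.

```python
# Non-adaptive Bernoulli group testing with COMP decoding (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

n_items, k_defective, n_tests = 500, 10, 120
defective = np.zeros(n_items, dtype=bool)
defective[rng.choice(n_items, k_defective, replace=False)] = True

# Bernoulli design: each item joins each test independently with probability q.
q = 1.0 / k_defective
A = rng.random((n_tests, n_items)) < q   # A[t, i] = True iff item i is in test t

# A test comes back positive iff it contains at least one defective item.
outcomes = (A & defective).any(axis=1)

# COMP decoding: an item appearing in any negative test cannot be defective;
# declare every remaining item defective.
in_negative_test = A[~outcomes].any(axis=0)
estimate = ~in_negative_test

print("false negatives:", int(np.sum(defective & ~estimate)))
print("false positives:", int(np.sum(~defective & estimate)))
```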
Large graphs abound in machine learning, data mining, and several related areas. A useful step towards analyzing such graphs is that of obtaining certain summary statistics, e.g., the expected length of a shortest path between two nodes, or the expected weight of a minimum spanning tree of the graph. These statistics provide insight into the structure of a graph, and they can help predict global properties of a graph. Motivated thus, we propose to study statistical properties of structured subgraphs (of a given graph), in particular, to estimate the expected objective function value of a combinatorial optimization problem over these subgraphs. The general task is very difficult, if not unsolvable, so for concreteness we describe a more specific statistical estimation problem based on spanning trees. We hope that our position paper encourages others to also study other types of graphical structures for which one can prove nontrivial statistical estimates.
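One concrete version of the spanning-tree statistic mentioned above can be estimated by simple Monte Carlo, as in the sketch below; the uniform random edge weights, trial count, and test graph are arbitrary illustrative choices, not the paper's estimation procedure.

```python
# Monte Carlo sketch: expected minimum-spanning-tree weight of a graph
# under i.i.d. Uniform(0,1) edge weights (an assumed weight model).
import random
import networkx as nx

def expected_mst_weight(G, trials=200, seed=0):
    """Average minimum-spanning-tree weight over random edge weights."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        for u, v in G.edges():
            G[u][v]["weight"] = rng.random()   # resample all edge weights
        T = nx.minimum_spanning_tree(G, weight="weight")
        total += T.size(weight="weight")       # sum of MST edge weights
    return total / trials

G = nx.erdos_renyi_graph(50, 0.2, seed=1)      # toy test graph
print(f"estimated expected MST weight: {expected_mst_weight(G):.3f}")
```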
For $\Delta \ge 5$ and $q$ large as a function of $\Delta$, we give a detailed picture of the phase transition of the random cluster model on random $\Delta$-regular graphs. In particular, we determine the limiting distribution of the weights of the ordered and disordered phases at criticality, and prove exponential decay of correlations and central limit theorems away from criticality. Our techniques are based on using polymer models and the cluster expansion to control deviations from the ordered and disordered ground states. These techniques also yield efficient approximate counting and sampling algorithms for the Potts and random cluster models on random $\Delta$-regular graphs at all temperatures when $q$ is large. This includes the critical temperature, at which the Glauber and Swendsen-Wang dynamics for the Potts model are known to mix slowly. We further prove new slow-mixing results for Markov chains, most notably that the Swendsen-Wang dynamics mixes exponentially slowly throughout an open interval containing the critical temperature. This was previously only known at the critical temperature itself. Many of our results apply more generally to $\Delta$-regular graphs satisfying a small-set expansion condition.
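For reference, the Glauber dynamics mentioned above is the single-site chain sketched below for the $q$-state ferromagnetic Potts model; this is a generic textbook sketch with arbitrary parameter choices, not tuned to the regime analyzed in the paper.

```python
# Single-site Glauber dynamics for the q-state ferromagnetic Potts model on a
# random Delta-regular graph. Values of q, Delta, beta, and the step count
# are illustrative assumptions.
import math
import random
import networkx as nx

def glauber_step(G, sigma, q, beta, rng):
    """Recolor one uniformly random vertex from its conditional distribution
    given the colors of its neighbors."""
    v = rng.randrange(len(sigma))
    counts = [0] * q
    for u in G.neighbors(v):
        counts[sigma[u]] += 1                  # agreeing neighbors per color
    weights = [math.exp(beta * c) for c in counts]
    r = rng.random() * sum(weights)
    for color, w in enumerate(weights):
        r -= w
        if r <= 0:
            sigma[v] = color
            return

rng = random.Random(0)
q, Delta, n, beta = 8, 5, 100, 0.5
G = nx.random_regular_graph(Delta, n, seed=0)  # vertices labeled 0..n-1
sigma = [rng.randrange(q) for _ in range(n)]   # uniformly random start
for _ in range(10_000):
    glauber_step(G, sigma, q, beta, rng)
agree = sum(sigma[u] == sigma[v] for u, v in G.edges())
print(f"monochromatic edges after 10k steps: {agree}/{G.number_of_edges()}")
```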
Redistricting is the problem of dividing a state into a number $k$ of regions, called districts. Voters in each district elect a representative. The primary criteria are: each district is connected, district populations are equal (or nearly equal), and districts are compact. There are multiple competing definitions of compactness, usually minimizing some quantity. One measure that has recently been promoted by Duchin and others is the number of cut edges. In redistricting, one is given atomic regions out of which each district must be built, together with the population of each atomic region. Consider the graph with one vertex per atomic region (with weight equal to the region's population) and an edge between atomic regions that share a boundary. A districting plan is a partition of the vertices into $k$ parts, each connected, of nearly equal weight. The districts are considered compact to the extent that the plan minimizes the number of edges crossing between different parts. Consider two problems: find the most compact districting plan, and sample districting plans under a compactness constraint uniformly at random. Both problems are NP-hard, so we restrict the input graph to have branchwidth at most $w$. (A planar graph's branchwidth is bounded by its diameter.) If both $k$ and $w$ are bounded by constants, the problems are solvable in polynomial time. Assume vertices have weight 1. One would like algorithms whose running times are of the form $O(f(k,w)\, n^c)$ for some constant $c$ independent of $k$ and $w$, in which case the problems are said to be fixed-parameter tractable with respect to $k$ and $w$. We show that, under a complexity-theoretic assumption, no such algorithms exist. However, we do give algorithms with running time $O(c^w n^{k+1})$. Thus, if the diameter of the graph is moderately small and the number of districts is very small, our algorithm is usable.
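A minimal sketch of the objects defined in this abstract, assuming unit populations on a toy grid of atomic regions: it checks connectivity and near-balance of a 2-district plan and counts its cut edges. The 5% balance tolerance and the toy instance are hypothetical choices for illustration.

```python
# Validate a districting plan (connected, nearly balanced parts) on the
# adjacency graph of atomic regions, and count its cut edges.
import networkx as nx

def plan_stats(G, parts, pop_attr="population", tol=0.05):
    """parts: list of k vertex sets partitioning G's vertices."""
    pops = [sum(G.nodes[v][pop_attr] for v in part) for part in parts]
    target = sum(pops) / len(parts)
    return {
        "balanced": all(abs(p - target) <= tol * target for p in pops),
        "connected": all(nx.is_connected(G.subgraph(part)) for part in parts),
        # A cut edge joins two different parts of the plan.
        "cut_edges": sum(1 for u, v in G.edges()
                         if not any(u in part and v in part for part in parts)),
    }

# Toy instance: a 4 x 4 grid of unit-population atomic regions, k = 2.
G = nx.grid_2d_graph(4, 4)
nx.set_node_attributes(G, 1, "population")
left = {(i, j) for i in range(4) for j in range(2)}
right = set(G.nodes) - left
print(plan_stats(G, [left, right]))   # expect balanced, connected, 4 cut edges
```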