
Efficient algorithms for the Potts model on small-set expanders

Posted by: Charles Carlson
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We develop an approximation algorithm for the partition function of the ferromagnetic Potts model on graphs with a small-set expansion condition, and as a step in the argument we give a graph partitioning algorithm with expansion and minimum degree conditions on the subgraphs induced by each part. These results extend previous work of Jenssen, Keevash, and Perkins (2019) on the Potts model and related problems in expander graphs, and of Oveis Gharan and Trevisan (2014) on partitioning into expanders.
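In the standard convention, the ferromagnetic Potts partition function of a graph $G$ with $q$ colours at inverse temperature $\beta \ge 0$ is $Z_G(q,\beta)=\sum_{\sigma:V\to[q]}\exp(\beta\,m(\sigma))$, where $m(\sigma)$ counts the monochromatic edges. As a point of reference for the quantity being approximated (this is not the paper's algorithm), here is a minimal brute-force sketch in Python; the 4-cycle, $q=3$ and $\beta=0.5$ are arbitrary choices for the example.

# Brute-force ferromagnetic Potts partition function on a tiny graph.
# Z(q, beta) = sum over all q-colourings sigma of exp(beta * #monochromatic edges).
# Only feasible for very small graphs; the paper's contribution is an efficient
# approximation of this quantity on small-set expanders.
from itertools import product
from math import exp

def potts_partition_function(n, edges, q, beta):
    """Exact Z by enumerating all q**n spin assignments."""
    Z = 0.0
    for sigma in product(range(q), repeat=n):
        mono = sum(1 for u, v in edges if sigma[u] == sigma[v])
        Z += exp(beta * mono)
    return Z

# Example: a 4-cycle, q = 3 colours, inverse temperature beta = 0.5.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(potts_partition_function(4, edges, q=3, beta=0.5))

Exhaustive enumeration is only feasible for a handful of vertices, which is precisely why efficient approximation under structural assumptions such as small-set expansion is of interest.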




Read also

We establish a polynomial-time approximation algorithm for partition functions of quantum spin models at high temperature. Our algorithm is based on the quantum cluster expansion of Netočný and Redig and the cluster expansion approach to designing algorithms due to Helmuth, Perkins, and Regts. Similar results have previously been obtained by related methods, and our main contribution is a simple and slightly sharper analysis for the case of pairwise interactions on bounded-degree graphs.
In this paper we study expander graphs and their minors. Specifically, we attempt to answer the following question: what is the largest function $f(n,\alpha,d)$ such that every $n$-vertex $\alpha$-expander with maximum vertex degree at most $d$ contains \textbf{every} graph $H$ with at most $f(n,\alpha,d)$ edges and vertices as a minor? Our main result is that there is some universal constant $c$ such that $f(n,\alpha,d)\geq \frac{n}{c\log n}\cdot \left(\frac{\alpha}{d}\right)^c$. This bound achieves a tight dependence on $n$: it is well known that there are bounded-degree $n$-vertex expanders that do not contain any grid with $\Omega(n/\log n)$ vertices and edges as a minor. The best previous result showed that $f(n,\alpha,d) \geq \Omega(n/\log^{\kappa}n)$, where $\kappa$ depends on both $\alpha$ and $d$. Additionally, we provide a randomized algorithm that, given an $n$-vertex $\alpha$-expander $G$ with maximum vertex degree at most $d$ and another graph $H$ containing at most $\frac{n}{c\log n}\cdot \left(\frac{\alpha}{d}\right)^c$ vertices and edges, with high probability finds a model of $H$ in $G$ in time $\mathrm{poly}(n)\cdot (d/\alpha)^{O\left(\log(d/\alpha)\right)}$. We note that similar but stronger results were independently obtained by Krivelevich and Nenadov: they show that $f(n,\alpha,d)=\Omega\left(\frac{n\alpha^2}{d^2\log n}\right)$, and provide an efficient algorithm that, given an $n$-vertex $\alpha$-expander of maximum vertex degree at most $d$ and a graph $H$ with $O\left(\frac{n\alpha^2}{d^2\log n}\right)$ vertices and edges, finds a model of $H$ in $G$. Finally, we observe that expanders are the `most minor-rich' family of graphs in the following sense: for every $n$-vertex and $m$-edge graph $G$, there exists a graph $H$ with $O\left(\frac{n+m}{\log n}\right)$ vertices and edges such that $H$ is not a minor of $G$.
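To make the notion of "finding a model of $H$ in $G$" concrete: a model assigns to each vertex of $H$ a non-empty, connected, pairwise-disjoint branch set in $G$ such that every edge of $H$ is realized by a $G$-edge between the corresponding branch sets. The following brute-force check (exponential time, tiny graphs only, and unrelated to the randomized algorithm described above) merely illustrates this definition; the example graphs are arbitrary.

# Brute-force test of "H is a minor of G" by searching for a model: a map of
# G's vertices to branch sets (or to no set) such that each branch set is
# connected in G and every edge of H has a G-edge between its two branch sets.
from itertools import product

def is_connected(vertices, edges):
    """True if the given non-empty vertex set induces a connected subgraph."""
    vertices = set(vertices)
    if not vertices:
        return False
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(w for u, w in edges if u == v and w in vertices)
        stack.extend(u for u, w in edges if w == v and u in vertices)
    return seen == vertices

def has_minor(n_G, edges_G, n_H, edges_H):
    # label[v] in {0..n_H-1} puts v into that branch set; the value n_H means "unused".
    for label in product(range(n_H + 1), repeat=n_G):
        branch = [[v for v in range(n_G) if label[v] == i] for i in range(n_H)]
        if not all(is_connected(b, edges_G) for b in branch):
            continue
        if all(any((label[u] == a and label[v] == b) or
                   (label[u] == b and label[v] == a)
                   for u, v in edges_G)
               for a, b in edges_H):
            return True
    return False

# Example: contracting one edge of the 4-cycle gives a triangle, so K3 is a
# minor of C4, while K4 is not.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
k3 = [(0, 1), (1, 2), (0, 2)]
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(has_minor(4, c4, 3, k3))  # True
print(has_minor(4, c4, 4, k4))  # False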
We give an algorithm for solving unique games (UG) instances whenever low-degree sum-of-squares proofs certify good bounds on the small-set-expansion of the underlying constraint graph via a hypercontractive inequality. Our algorithm is in fact more versatile, and succeeds even when the constraint graph is not a small-set expander as long as the structure of non-expanding small sets is (informally speaking) characterized by a low-degree sum-of-squares proof. Our results are obtained by rounding \emph{low-entropy} solutions -- measured via a new global potential function -- to sum-of-squares (SoS) semidefinite programs. This technique adds to the (currently short) list of general tools for analyzing SoS relaxations for \emph{worst-case} optimization problems. As corollaries, we obtain the first polynomial-time algorithms for solving any UG instance where the constraint graph is either the \emph{noisy hypercube}, the \emph{short code} or the \emph{Johnson} graph. The prior best algorithm for such instances was the eigenvalue enumeration algorithm of Arora, Barak, and Steurer (2010), which requires quasi-polynomial time for the noisy hypercube and nearly-exponential time for the short code and Johnson graphs. All of our results achieve an approximation of $1-\epsilon$ vs $\delta$ for UG instances, where $\epsilon>0$ and $\delta > 0$ depend on the expansion parameters of the graph but are independent of the alphabet size.
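As background on the problem being solved: a unique games instance places a permutation constraint on each edge of a constraint graph over an alphabet $[k]$, an assignment satisfies an edge $(u,v)$ with permutation $\pi$ if $x_u = \pi(x_v)$, and the goal is to satisfy as many constraints as possible. The exhaustive solver below (exponential time, on a small hypothetical instance) only illustrates the objective that the "$1-\epsilon$ vs $\delta$" guarantee refers to, not the SoS rounding algorithm of the paper.

# Brute-force Unique Games solver: each constraint (u, v, pi) is satisfied by an
# assignment x when x[u] == pi[x[v]], with pi a permutation of the alphabet [k].
# Returns the maximum fraction of satisfied constraints. Illustrative only.
from itertools import product

def max_satisfied_fraction(n, k, constraints):
    """constraints: list of (u, v, pi) with pi a length-k tuple encoding a permutation."""
    best = 0
    for x in product(range(k), repeat=n):
        sat = sum(1 for u, v, pi in constraints if x[u] == pi[x[v]])
        best = max(best, sat)
    return best / len(constraints)

# Example instance on a triangle with alphabet size k = 2: the cycle of
# permutations is inconsistent, so at most 2 of the 3 constraints can hold.
identity, swap = (0, 1), (1, 0)
constraints = [(0, 1, identity), (1, 2, identity), (2, 0, swap)]
print(max_satisfied_fraction(3, 2, constraints))  # 0.666...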
Michael Saks, C. Seshadhri (2012)
Approximating the length of the longest increasing sequence (LIS) of an array is a well-studied problem. We study this problem in the data stream model, where the algorithm is allowed to make a single left-to-right pass through the array and the key resource to be minimized is the amount of additional memory used. We present an algorithm which, for any $\delta > 0$, given streaming access to an array of length $n$, provides a $(1+\delta)$-multiplicative approximation to the \emph{distance to monotonicity} ($n$ minus the length of the LIS), and uses only $O((\log^2 n)/\delta)$ space. The previous best known approximation using polylogarithmic space was a multiplicative 2-factor. Our algorithm can be used to estimate the length of the LIS to within an additive $\delta n$ for any $\delta >0$, while previous algorithms could only achieve additive error $n(1/2-o(1))$. Our algorithm is very simple, being just 3 lines of pseudocode, and has a small update time. It is essentially a polylogarithmic-space approximate implementation of a classic dynamic program that computes the LIS. We also give a streaming algorithm for approximating $LCS(x,y)$, the length of the longest common subsequence between strings $x$ and $y$, each of length $n$. Our algorithm works in the asymmetric setting (inspired by \cite{AKO10}), in which we have random access to $y$ and streaming access to $x$, and runs in small space provided that no single symbol appears very often in $y$. More precisely, it gives an additive-$\delta n$ approximation to $LCS(x,y)$ (and hence also to $E(x,y) = n-LCS(x,y)$, the edit distance between $x$ and $y$ when insertions and deletions, but not substitutions, are allowed), with space complexity $O(k(\log^2 n)/\delta)$, where $k$ is the maximum number of times any one symbol appears in $y$.
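The classic offline computation that the streaming algorithm approximates can be stated in a few lines: patience sorting gives the LIS length in $O(n\log n)$ time, and the distance to monotonicity is then $n$ minus that length. A small full-memory reference implementation (so not the streaming algorithm itself; the input array is an arbitrary example) follows.

# Offline reference: length of the longest strictly increasing subsequence via
# patience sorting (O(n log n)), and distance to monotonicity = n - LIS length.
# This is the exact quantity the streaming algorithm approximates in
# polylogarithmic space.
from bisect import bisect_left

def lis_length(arr):
    tails = []  # tails[i] = smallest possible tail of an increasing subsequence of length i+1
    for x in arr:
        i = bisect_left(tails, x)  # bisect_left enforces strict increase
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def distance_to_monotonicity(arr):
    return len(arr) - lis_length(arr)

arr = [3, 1, 4, 1, 5, 9, 2, 6]
print(lis_length(arr))                # 4  (e.g. 1, 4, 5, 9)
print(distance_to_monotonicity(arr))  # 4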