
Linear Size Sparsifier and the Geometry of the Operator Norm Ball

Posted by: Victor Reis
Publication date: 2019
Research field: Computer Science
Paper language: English

The Matrix Spencer Conjecture asks whether, given $n$ symmetric matrices in $\mathbb{R}^{n \times n}$ with eigenvalues in $[-1,1]$, one can always find signs so that their signed sum has singular values bounded by $O(\sqrt{n})$. The standard approach in discrepancy requires proving that the convex body of all good fractional signings is large enough. However, this question has remained wide open due to the lack of tools to certify measure lower bounds for rather small non-polyhedral convex sets. A seminal result by Batson, Spielman and Srivastava from 2008 shows that any undirected graph admits a linear size spectral sparsifier. Again, one can define a convex body of all good fractional signings. We can indeed prove that this body is close to most of the Gaussian measure. This implies that a discrepancy algorithm by the second author can be used to sample a linear size sparsifier. In contrast to previous methods, we require only a logarithmic number of sampling phases.
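
To make the discrepancy quantity concrete, here is a small Python experiment; it is an illustration only, not the paper's algorithm. It draws random symmetric matrices with unit operator norm, tries random signings, and compares the best signed sum found against the conjectured $O(\sqrt{n})$ scale. The matrix generator, dimension, and trial count are arbitrary choices of ours.

```python
# Illustration of the Matrix Spencer quantity (not the paper's method):
# random symmetric matrices with spectrum in [-1, 1], random signings.
import numpy as np

rng = np.random.default_rng(0)
n = 32

def random_bounded_symmetric(n):
    """Random symmetric matrix rescaled so its operator norm is 1."""
    G = rng.standard_normal((n, n))
    A = (G + G.T) / 2
    return A / np.linalg.norm(A, 2)  # ord=2 is the largest singular value

mats = [random_bounded_symmetric(n) for _ in range(n)]

best = np.inf
for _ in range(200):  # random signings; the paper samples via discrepancy
    signs = rng.choice([-1.0, 1.0], size=n)
    S = sum(s * A for s, A in zip(signs, mats))
    best = min(best, np.linalg.norm(S, 2))

print(f"best operator norm over random signings: {best:.2f}")
print(f"conjectured scale sqrt(n) = {np.sqrt(n):.2f}")
```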

Read also

The known linear-time kernelizations for $d$-Hitting Set guarantee linear worst-case running times using a quadratic-size data structure (that is not fully initialized). Getting rid of this data structure, we show that problem kernels of asymptotically optimal size $O(k^d)$ for $d$-Hitting Set are computable in linear time and space. Additionally, we experimentally compare the linear-time kernelizations for $d$-Hitting Set to each other and to a classical data reduction algorithm due to Weihe.
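
For context, here is a minimal sketch of the textbook depth-bounded branching for $d$-Hitting Set, not the linear-time kernelization discussed above: any set not yet hit must contribute at least one of its at most $d$ elements to the solution, giving an $O(d^k)$-leaf search tree. The function name and set representation are ours.

```python
def hitting_set(sets, k, chosen=frozenset()):
    """Search for a hitting set of size <= k; returns one or None."""
    unhit = next((S for S in sets if not (S & chosen)), None)
    if unhit is None:
        return chosen                 # every set is already hit
    if k == 0:
        return None                   # budget exhausted, prune this branch
    for v in unhit:                   # some element of an unhit set must be chosen
        hit = hitting_set(sets, k - 1, chosen | {v})
        if hit is not None:
            return hit
    return None

family = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
print(hitting_set(family, 2))         # e.g. frozenset({1, 3})
```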
Mohit Singh, 2019
We give a characterization result for the integrality gap of the natural linear programming relaxation for the vertex cover problem. We show that the integrality gap of the standard linear programming relaxation for any graph $G$ equals $\left(2-\frac{2}{\chi^f(G)}\right)$, where $\chi^f(G)$ denotes the fractional chromatic number of $G$.
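
As a quick sanity check of this formula (our example, not taken from the paper), consider the complete graph $K_n$, where $\chi^f(K_n) = n$:

```latex
% On K_n: the LP can set x_v = 1/2 on every vertex, while any integral
% vertex cover needs n-1 vertices, and \chi^f(K_n) = n.
\mathrm{LP}(K_n) = \tfrac{n}{2}, \qquad \mathrm{OPT}(K_n) = n-1, \qquad
\frac{\mathrm{OPT}(K_n)}{\mathrm{LP}(K_n)} = \frac{n-1}{n/2}
  = 2 - \frac{2}{n} = 2 - \frac{2}{\chi^f(K_n)}.
```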
In the classical Subset Sum problem we are given a set $X$ and a target $t$, and the task is to decide whether there exists a subset of $X$ which sums to $t$. A recent line of research has resulted in $\tilde{O}(t)$-time algorithms, which are (near-)optimal under popular complexity-theoretic assumptions. On the other hand, the standard dynamic programming algorithm runs in time $O(n \cdot |\mathcal{S}(X,t)|)$, where $\mathcal{S}(X,t)$ is the set of all subset sums of $X$ that are smaller than $t$. Furthermore, all known pseudopolynomial algorithms actually solve a stronger task, since they compute the whole set $\mathcal{S}(X,t)$. As the aforementioned two running times are incomparable, in this paper we ask whether one can achieve the best of both worlds: running time $\tilde{O}(|\mathcal{S}(X,t)|)$. In particular, we ask whether $\mathcal{S}(X,t)$ can be computed in near-linear time in the output size. Using a diverse toolkit containing techniques such as color coding, sparse recovery, and sumset estimates, we make considerable progress towards this question and design an algorithm running in time $\tilde{O}(|\mathcal{S}(X,t)|^{4/3})$. Central to our approach is the study of top-$k$-convolution, a natural problem of independent interest: given sparse polynomials with non-negative coefficients, compute the lowest $k$ non-zero monomials of their product. We design an algorithm running in time $\tilde{O}(k^{4/3})$ by a combination of sparse convolution and sumset estimates considered in Additive Combinatorics. Moreover, we provide evidence that going beyond some of the barriers we have faced requires either an algorithmic breakthrough or possibly new techniques from Additive Combinatorics on how to pass from information on restricted sumsets to information on unrestricted sumsets.
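
The standard dynamic program referenced above admits a very short Python sketch; this is the $O(n \cdot |\mathcal{S}(X,t)|)$ baseline, not the paper's $\tilde{O}(|\mathcal{S}(X,t)|^{4/3})$ algorithm. For convenience the sketch keeps sums up to and including $t$.

```python
# Baseline DP for Subset Sum: computes the whole set S(X, t).
def all_subset_sums(X, t):
    """Return the set of all subset sums of X that are at most t."""
    sums = {0}
    for x in X:
        # extend every attainable sum by x, discarding anything above t
        sums |= {s + x for s in sums if s + x <= t}
    return sums

print(sorted(all_subset_sums([3, 5, 7], 12)))  # [0, 3, 5, 7, 8, 10, 12]
```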
We address counting and optimization variants of multicriteria global min-cut and size-constrained min-$k$-cut in hypergraphs. 1. For an $r$-rank $n$-vertex hypergraph endowed with $t$ hyperedge-cost functions, we show that the number of multiobjective min-cuts is $O(r2^{tr}n^{3t-1})$. In particular, this shows that the number of parametric min-cuts in constant rank hypergraphs for a constant number of criteria is strongly polynomial, thus resolving an open question by Aissi, Mahjoub, McCormick, and Queyranne (Math Programming, 2015). In addition, we give randomized algorithms to enumerate all multiobjective min-cuts and all Pareto-optimal cuts in strongly polynomial time. 2. We also address node-budgeted multiobjective min-cuts: For an $n$-vertex hypergraph endowed with $t$ vertex-weight functions, we show that the number of node-budgeted multiobjective min-cuts is $O(r2^{r}n^{t+2})$, where $r$ is the rank of the hypergraph, and the number of node-budgeted $b$-multiobjective min-cuts for a fixed budget-vector $b$ is $O(n^2)$. 3. We show that min-$k$-cut in hypergraphs subject to constant lower bounds on part sizes is solvable in polynomial time for constant $k$, thus resolving an open problem posed by Queyranne. Our technique also shows that the number of optimal solutions is polynomial. All of our results build on the random contraction approach of Karger (SODA, 1993). Our techniques illustrate the versatility of the random contraction approach to address counting and algorithmic problems concerning multiobjective min-cuts and size-constrained $k$-cuts in hypergraphs.
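
Since all three results build on random contraction, a minimal sketch of Karger's original scheme for ordinary graphs may help fix ideas; the hypergraph and multiobjective adaptations are the paper's contribution, not shown here. The sketch assumes a connected input graph.

```python
# Karger's random contraction (SODA 1993), plain-graph version.
import random

def contract_cut(edges, n):
    """One contraction run on a connected n-vertex multigraph;
    returns the number of edges crossing the resulting 2-way cut."""
    parent = list(range(n))

    def find(v):                          # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    remaining = n
    live = edges[:]
    while remaining > 2:
        live = [(a, b) for a, b in live if find(a) != find(b)]  # drop self-loops
        u, v = random.choice(live)        # contract a uniformly random edge
        parent[find(u)] = find(v)
        remaining -= 1
    return sum(1 for a, b in live if find(a) != find(b))

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]  # a triangle plus pendant vertex 3
print(min(contract_cut(edges, 4) for _ in range(50)))  # prints 1 w.h.p.
```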
In the Categorical Clustering problem, we are given a set of vectors (matrix) $A=\{a_1,\ldots,a_n\}$ over $\Sigma^m$, where $\Sigma$ is a finite alphabet, and integers $k$ and $B$. The task is to partition $A$ into $k$ clusters such that the median objective of the clustering in the Hamming norm is at most $B$. That is, we seek a partition $\{I_1,\ldots,I_k\}$ of $\{1,\ldots,n\}$ and vectors $c_1,\ldots,c_k\in\Sigma^m$ such that $\sum_{i=1}^k\sum_{j\in I_i}d_H(c_i,a_j)\leq B$, where $d_H(a,b)$ is the Hamming distance between vectors $a$ and $b$. Fomin, Golovach, and Panolan [ICALP 2018] proved that the problem is fixed-parameter tractable (for the binary case $\Sigma=\{0,1\}$) by giving an algorithm that solves the problem in time $2^{O(B\log B)}(mn)^{O(1)}$. We extend this algorithmic result to a popular capacitated clustering model, where in addition the sizes of the clusters should satisfy certain constraints. More precisely, in Capacitated Clustering we are additionally given two non-negative integers $p$ and $q$, and seek a clustering with $p\leq |I_i|\leq q$ for all $i\in\{1,\ldots,k\}$. Our main theorem is that Capacitated Clustering is solvable in time $2^{O(B\log B)}|\Sigma|^B(mn)^{O(1)}$. The theorem not only extends the previous algorithmic results to a significantly more general model, it also implies algorithms for several other variants of Categorical Clustering with constraints on cluster sizes.
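
To illustrate the objective (not the paper's FPT algorithm): once a partition is fixed, the optimal center of each cluster is its coordinate-wise mode, because the Hamming median objective decomposes per coordinate. A minimal Python sketch, with function names of our choosing:

```python
# Per-cluster cost in Categorical Clustering: the best center is the
# coordinate-wise most frequent symbol.
from collections import Counter

def best_center(cluster):
    """Coordinate-wise mode over the cluster's vectors."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))

def cost(cluster, center):
    """Sum of Hamming distances from each vector to the center."""
    return sum(sum(a != c for a, c in zip(v, center)) for v in cluster)

cluster = [(0, 1, 1), (0, 0, 1), (1, 0, 1)]
c = best_center(cluster)          # (0, 0, 1)
print(c, cost(cluster, c))        # (0, 0, 1) 2
```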