
Subgraph Sparsification and Nearly Optimal Ultrasparsifiers

 Added by Alexandra Kolla
Publication date: 2009
Language: English





We consider a variation of the spectral sparsification problem where we are required to keep a subgraph of the original graph. Formally, given a union of two weighted graphs $G$ and $W$ and an integer $k$, we are asked to find a $k$-edge weighted graph $W_k$ such that $G+W_k$ is a good spectral sparsifier of $G+W$. We will refer to this problem as subgraph (spectral) sparsification. We present a nontrivial condition on $G$ and $W$ such that a good sparsifier exists and give a polynomial-time algorithm to find the sparsifier. As a significant application of our technique, we show that for each positive integer $k$, every $n$-vertex weighted graph has an $(n-1+k)$-edge spectral sparsifier with relative condition number at most $\frac{n}{k}\log n\,\tilde{O}(\log\log n)$, where $\tilde{O}(\cdot)$ hides lower-order terms. Our bound is within a factor of $\tilde{O}(\log\log n)$ of optimal. This nearly settles a question left open by Spielman and Teng about ultrasparsifiers, which are a key component in their nearly linear-time algorithms for solving symmetric diagonally dominant linear systems. We also present another application of our technique to spectral optimization, in which the goal is to maximize the algebraic connectivity of a graph (e.g., turn it into an expander) with a limited number of edges.
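To make the notions of spectral sparsifier and relative condition number concrete, here is a minimal numerical sketch (our illustration, not the paper's algorithm): it builds graph Laplacians and computes the relative condition number of the Laplacian pencil as the ratio of its extreme nonzero generalized eigenvalues. The helper names `laplacian` and `relative_condition_number` are ours, and dense numpy routines are used only for readability.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Graph Laplacian L = D - A from a list of (u, v, weight) triples."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def relative_condition_number(L_sparse, L_full):
    """Ratio of the largest to the smallest nonzero generalized eigenvalue of
    the pencil (L_full, L_sparse). Both Laplacians share the all-ones
    nullspace, so we use the pseudoinverse and drop the zero eigenvalue."""
    eigs = np.sort(np.real(np.linalg.eigvals(np.linalg.pinv(L_sparse) @ L_full)))
    nonzero = eigs[eigs > 1e-9]
    return nonzero[-1] / nonzero[0]

# Example: a 2-edge spanning tree of a weighted triangle (the (n-1)-edge case).
L_G = laplacian(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)])
L_T = laplacian(3, [(0, 1, 1.0), (1, 2, 1.0)])
print(relative_condition_number(L_T, L_G))  # 3.0
```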



Related research

How might one reduce a graph? That is, generate a smaller graph that preserves the global structure at the expense of discarding local details? There has been extensive work on both graph sparsification (removing edges) and graph coarsening (merging nodes, often by edge contraction); however, these operations are currently treated separately. Interestingly, for a planar graph, edge deletion corresponds to edge contraction in its planar dual (and more generally, for a graphical matroid and its dual). Moreover, with respect to the dynamics induced by the graph Laplacian (e.g., diffusion), deletion and contraction are physical manifestations of two reciprocal limits: edge weights of $0$ and $\infty$, respectively. In this work, we provide a unifying framework that captures both of these operations, allowing one to simultaneously sparsify and coarsen a graph while preserving its large-scale structure. The limit of infinite edge weight is rarely considered, as many classical notions of graph similarity diverge. However, its algebraic, geometric, and physical interpretations are reflected in the Laplacian pseudoinverse $\mathbf{\mathit{L}}^{\dagger}$, which remains finite in this limit. Motivated by this insight, we provide a probabilistic algorithm that reduces graphs while preserving $\mathbf{\mathit{L}}^{\dagger}$, using an unbiased procedure that minimizes its variance. We compare our algorithm with several existing sparsification and coarsening algorithms using real-world datasets, and demonstrate that it more accurately preserves the large-scale structure.
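The two limits described in this abstract can be checked numerically. The sketch below is our own illustration on a 4-cycle (not the authors' reduction algorithm): sending one edge weight to $0$ recovers the pseudoinverse of the graph with that edge deleted, while sending it to a very large value leaves the pseudoinverse finite, with the two endpoint rows coalescing as if the vertices were merged.

```python
import numpy as np

def laplacian(n, weighted_edges):
    """Graph Laplacian L = D - A from (u, v, weight) triples."""
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def cycle_with_w01(w):
    """A 4-cycle in which edge (0, 1) carries weight w."""
    return laplacian(4, [(0, 1, w), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)])

L_del = np.linalg.pinv(cycle_with_w01(1e-8))  # weight -> 0: edge deletion
L_con = np.linalg.pinv(cycle_with_w01(1e8))   # weight -> inf: edge contraction

# Deletion limit: approaches the pseudoinverse of the 3-edge path.
L_path = np.linalg.pinv(laplacian(4, [(1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]))
print(np.allclose(L_del, L_path, atol=1e-3))       # True

# Contraction limit: L^dagger stays finite, and rows 0 and 1 converge,
# so vertices 0 and 1 behave as a single merged vertex.
print(np.allclose(L_con[0], L_con[1], atol=1e-3))  # True
```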
The $k$-dimensional Weisfeiler-Leman algorithm ($k$-WL) is a fruitful approach to the Graph Isomorphism problem. 2-WL corresponds to the original algorithm suggested by Weisfeiler and Leman over 50 years ago, while 1-WL is the classical color refinement routine. Indistinguishability by $k$-WL is an equivalence relation on graphs that is of fundamental importance for isomorphism testing, descriptive complexity theory, and graph similarity testing, which is also of some relevance in artificial intelligence. Focusing on dimensions $k=1,2$, we investigate subgraph patterns whose counts are $k$-WL invariant, and whose occurrence is $k$-WL invariant. We achieve a complete description of all such patterns for dimension $k=1$ and considerably extend the previous results known for $k=2$.
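For concreteness, here is a compact sketch of the classical 1-WL (color refinement) routine mentioned above; the function names are ours. Two graphs whose stable color histograms differ are distinguished by 1-WL.

```python
from collections import Counter

def wl1_colors(adj):
    """1-WL / color refinement: repeatedly replace each vertex color by a
    canonical label of (own color, multiset of neighbor colors) until the
    coloring stabilizes. `adj` maps each vertex to a list of neighbors."""
    colors = {v: 0 for v in adj}              # uniform initial coloring
    for _ in range(len(adj)):                 # stabilizes within n rounds
        sigs = {v: (colors[v],
                    tuple(sorted(Counter(colors[u] for u in adj[v]).items())))
                for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        new_colors = {v: palette[sigs[v]] for v in adj}
        if new_colors == colors:
            break
        colors = new_colors
    return colors

# Different stable color histograms certify 1-WL distinguishability.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(sorted(Counter(wl1_colors(path).values()).items()))  # [(0, 2), (1, 2)]
print(sorted(Counter(wl1_colors(star).values()).items()))  # [(0, 3), (1, 1)]
```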
In the group testing problem, the aim is to identify a small set of $k \sim n^{\theta}$ infected individuals out of a population of size $n$, $0<\theta<1$. We avail ourselves of a test procedure capable of testing groups of individuals, with the test returning a positive result iff at least one individual in the group is infected. The aim is to devise a test design with as few tests as possible so that the set of infected individuals can be identified correctly with high probability. We establish an explicit sharp information-theoretic/algorithmic phase transition $m_{\mathrm{inf}}$ for non-adaptive group testing, where all tests are conducted in parallel. Thus, with more than $m_{\mathrm{inf}}$ tests the infected individuals can be identified in polynomial time whp, while learning the set of infected individuals is information-theoretically impossible with fewer tests. In addition, we develop an optimal adaptive scheme where the tests are conducted in two stages.
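As a concrete baseline for the non-adaptive setting, the sketch below pairs a random Bernoulli pooling design with the simple COMP decoder (a standard textbook baseline, not the sharp design of this abstract): every individual appearing in at least one negative test is declared healthy, and the rest are declared infected. The parameter choices are illustrative.

```python
import random

def comp_decode(n, pools, results):
    """COMP: anyone in a negative test is healthy; declare the rest infected."""
    declared = set(range(n))
    for pool, positive in zip(pools, results):
        if not positive:
            declared -= pool
    return declared

random.seed(0)
n, k = 1000, 10                              # k ~ n^theta infected individuals
infected = set(random.sample(range(n), k))

# Bernoulli design: each individual joins each test independently, prob ~ 1/k.
m = 300                                      # O(k log n) tests suffice for COMP
pools = [{i for i in range(n) if random.random() < 1.0 / k} for _ in range(m)]
results = [bool(pool & infected) for pool in pools]   # OR channel

print(comp_decode(n, pools, results) == infected)     # typically True at this m
```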
Vasileios Nakos (2019)
In the sparse polynomial multiplication problem, one is asked to multiply two sparse polynomials f and g in time that is proportional to the size of the input plus the size of the output. The polynomials are given via lists of their coefficients F and G, respectively. Cole and Hariharan (STOC 02) gave a nearly optimal algorithm when the coefficients are positive, and Arnold and Roche (ISSAC 15) devised an algorithm running in time proportional to the structural sparsity of the product, i.e., the size of the set supp(F)+supp(G). The latter algorithm is particularly efficient when there are not too many cancellations of coefficients in the product. In this work we give a clean, nearly optimal algorithm for the sparse polynomial multiplication problem.
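For reference, the straightforward hash-map baseline below (our illustration, not the nearly optimal algorithm of the abstract) makes the roles of the structural sparsity supp(F)+supp(G) and of coefficient cancellation explicit; it runs in O(|F|·|G|) time rather than input-plus-output time.

```python
from collections import defaultdict

def sparse_poly_multiply(F, G):
    """Multiply sparse polynomials given as {exponent: coefficient} dicts.
    Every exponent touched lies in the structural support supp(F)+supp(G);
    cancellations can make the true product support strictly smaller."""
    H = defaultdict(int)
    for ef, cf in F.items():
        for eg, cg in G.items():
            H[ef + eg] += cf * cg
    return {e: c for e, c in H.items() if c != 0}

# (1 + x^100) * (1 - x^100) = 1 - x^200: structural support has size 3,
# but cancellation leaves only 2 terms in the product.
print(sparse_poly_multiply({0: 1, 100: 1}, {0: 1, 100: -1}))  # {0: 1, 200: -1}
```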
Alina Ene, Huy L. Nguyen (2018)
In this paper, we study the tradeoff between the approximation guarantee and adaptivity for the problem of maximizing a monotone submodular function subject to a cardinality constraint. The adaptivity of an algorithm is the number of sequential rounds of queries it makes to the evaluation oracle of the function, where in every round the algorithm is allowed to make polynomially-many parallel queries. Adaptivity is an important consideration in settings where the objective function is estimated using samples and in applications where adaptivity is the main running time bottleneck. Previous algorithms achieving a nearly-optimal $1 - 1/e - \epsilon$ approximation require $\Omega(n)$ rounds of adaptivity. In this work, we give the first algorithm that achieves a $1 - 1/e - \epsilon$ approximation using $O(\ln{n} / \epsilon^2)$ rounds of adaptivity. The number of function evaluations and additional running time of the algorithm are $O(n\, \mathrm{poly}(\log{n}, 1/\epsilon))$.
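The round/batch structure that adaptivity measures can be illustrated with a toy batched greedy: within each round, all marginal-gain queries are mutually independent and could be issued in parallel, so the number of sequential rounds is the round count rather than the cardinality $k$. This is our sketch of the adaptivity model only, not the paper's algorithm, and batching top gains this way does not by itself preserve the $1-1/e$ guarantee.

```python
def low_adaptivity_greedy(f, ground_set, k, rounds):
    """Round-based maximization of a monotone submodular f under |S| <= k.
    Each round issues all marginal-gain queries 'in parallel' and commits a
    whole batch, so adaptivity is `rounds` instead of k."""
    S = set()
    per_round = max(1, k // rounds)
    for _ in range(rounds):
        if len(S) >= k:
            break
        # One round of adaptivity: these queries do not depend on each other.
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        batch = sorted(gains, key=gains.get, reverse=True)[:min(per_round, k - len(S))]
        S |= set(batch)
    return S

# Toy monotone submodular function: coverage of a small universe.
universe_sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: len(set().union(*(universe_sets[i] for i in S)) if S else set())
print(low_adaptivity_greedy(f, set(universe_sets), k=2, rounds=2))  # {1, 3}
```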
