
Graph Sketching Against Adaptive Adversaries Applied to the Minimum Degree Algorithm

Published by Matthew Fahrbach
Publication date: 2018
Research field: Informatics engineering
Paper language: English





Motivated by the study of matrix elimination orderings in combinatorial scientific computing, we utilize graph sketching and local sampling to give a data structure that provides access to approximate fill degrees of a matrix undergoing elimination in $O(\text{polylog}(n))$ time per elimination and query. We then study the problem of using this data structure in the minimum degree algorithm, which is a widely used heuristic for producing elimination orderings for sparse matrices by repeatedly eliminating the vertex with (approximate) minimum fill degree. This leads to a nearly-linear time algorithm for generating approximate greedy minimum degree orderings. Despite extensive studies of algorithms for elimination orderings in combinatorial scientific computing, our result is the first rigorous incorporation of randomized tools in this setting, as well as the first nearly-linear time algorithm for producing elimination orderings with provable approximation guarantees. While our sketching data structure readily works in the oblivious adversary model, by repeatedly querying and greedily updating itself, it enters the adaptive adversarial model where the underlying sketches become prone to failure due to dependency issues with their internal randomness. We show how to use an additional sampling procedure to circumvent this problem and to create an independent access sequence. Our technique for decorrelating the interleaved queries and updates to this randomized data structure may be of independent interest.
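To make the greedy loop concrete, here is a minimal Python sketch of the minimum degree algorithm on an adjacency-set graph. It uses exact degree lookups where the paper's data structure would answer approximate fill-degree queries in $O(\text{polylog}(n))$ time; the representation and names are ours, not the paper's.

```python
# A minimal sketch of the greedy minimum degree loop, assuming an
# adjacency-set representation. The paper replaces the exact min-degree
# lookup below with a sketching structure answering *approximate* fill
# degrees; exact degrees are used here only for clarity.
from itertools import combinations

def minimum_degree_ordering(adj):
    """adj: dict vertex -> set of neighbors; consumed during elimination."""
    order = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # (approximate) min degree
        order.append(v)
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        for u, w in combinations(nbrs, 2):       # fill edges: clique on N(v)
            adj[u].add(w)
            adj[w].add(u)
    return order

# Example: a 4-cycle; the first elimination creates one fill edge.
print(minimum_degree_ordering({1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}))
```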




Read also

The minimum degree algorithm is one of the most widely used heuristics for reducing the cost of solving large sparse systems of linear equations. It has been studied for nearly half a century and has a rich history of bridging techniques from data structures, graph algorithms, and scientific computing. In this paper, we present a simple but novel combinatorial algorithm for computing an exact minimum degree elimination ordering in $O(nm)$ time, which improves on the best known time complexity of $O(n^3)$ and offers practical improvements for sparse systems with small values of $m$. Our approach leverages a careful amortized analysis, which also allows us to derive output-sensitive bounds for the running time of $O(\min\{m\sqrt{m^+}, \Delta m^+\} \log n)$, where $m^+$ is the number of unique fill edges and original edges that the algorithm encounters and $\Delta$ is the maximum degree of the input graph. Furthermore, we show there cannot exist an exact minimum degree algorithm that runs in $O(nm^{1-\varepsilon})$ time, for any $\varepsilon > 0$, assuming the strong exponential time hypothesis. This fine-grained reduction goes through the orthogonal vectors problem and uses a new low-degree graph construction called $U$-fillers, which act as pathological inputs and cause any minimum degree algorithm to exhibit nearly worst-case performance. With these two results, we nearly characterize the time complexity of computing an exact minimum degree ordering.
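To make the output-sensitive quantity above concrete, the following sketch (our own illustration, using the same adjacency-set representation as before) counts the fill edges produced by a fixed elimination ordering; note the paper's $m^+$ additionally counts the original edges the algorithm encounters.

```python
# A minimal sketch counting fill edges for a fixed elimination ordering.
from itertools import combinations

def count_fill_edges(adj, order):
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    fill = 0
    for v in order:
        nbrs = adj.pop(v)
        for u in nbrs:
            adj[u].discard(v)
        for u, w in combinations(nbrs, 2):
            if w not in adj[u]:                  # a genuinely new fill edge
                fill += 1
                adj[u].add(w)
                adj[w].add(u)
    return fill

# Eliminating the center of a star first turns its leaves into a clique.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(count_fill_edges(star, [0, 1, 2, 3]))      # prints 3
```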
Designing dynamic graph algorithms against an adaptive adversary is a major goal in the field of dynamic graph algorithms. While a few such algorithms are known for spanning trees, matchings, and single-source shortest paths, very little was known for an important primitive like graph sparsifiers. The challenge is how to approximately preserve so much information about the graph (e.g., all-pairs distances and all cuts) without revealing the algorithm's underlying randomness to the adaptive adversary. In this paper we present the first non-trivial efficient adaptive algorithms for maintaining spanners and cut sparsifiers. These algorithms in turn imply improvements over existing algorithms for other problems. Our first algorithm maintains a $\mathrm{polylog}(n)$-spanner of size $\tilde{O}(n)$ in $\mathrm{polylog}(n)$ amortized update time. The second algorithm maintains an $O(k)$-approximate cut sparsifier of size $\tilde{O}(n)$ in $\tilde{O}(n^{1/k})$ amortized update time, for any $k \ge 1$, which is $\mathrm{polylog}(n)$ time when $k = \log n$. The third algorithm maintains a $\mathrm{polylog}(n)$-approximate spectral sparsifier in $\mathrm{polylog}(n)$ amortized update time. The amortized update time of both algorithms can be made worst-case by paying some sub-polynomial factors. Prior to our result, there were near-optimal algorithms against oblivious adversaries (e.g., Baswana et al. [TALG12] and Abraham et al. [FOCS16]), but the only non-trivial adaptive dynamic algorithm requires $O(n)$ amortized update time to maintain $3$- and $5$-spanners of size $O(n^{1+1/2})$ and $O(n^{1+1/3})$, respectively [Ausiello et al. ESA05]. Our results are based on two novel techniques. The first technique is a generic black-box reduction that allows us to assume that the graph undergoes only edge deletions and, more importantly, remains an expander with almost-uniform degree. The second technique we call proactive resampling. [...]
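For context on the object being maintained, here is a static baseline only: the classic greedy $(2k-1)$-spanner construction of Althöfer et al., written for an unweighted graph (our own simplifying assumption). It does not attempt the paper's dynamic, adversary-robust maintenance, proactive resampling, or expander reduction.

```python
# A minimal static baseline: greedy (2k-1)-spanner of an unweighted graph.
from collections import deque

def greedy_spanner(n, edges, k):
    """Keep an edge only if its endpoints are > 2k-1 apart in the spanner."""
    spanner = [set() for _ in range(n)]

    def close(u, v, limit):
        # BFS from u to depth `limit` inside the current spanner.
        dist = {u: 0}
        queue = deque([u])
        while queue:
            x = queue.popleft()
            if x == v:
                return True
            if dist[x] < limit:
                for y in spanner[x]:
                    if y not in dist:
                        dist[y] = dist[x] + 1
                        queue.append(y)
        return False

    kept = []
    for u, v in edges:
        if not close(u, v, 2 * k - 1):
            spanner[u].add(v)
            spanner[v].add(u)
            kept.append((u, v))
    return kept

# A triangle with k=2 (stretch 3): the third edge is redundant.
print(greedy_spanner(3, [(0, 1), (1, 2), (0, 2)], k=2))  # [(0, 1), (1, 2)]
```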
Kernel methods are fundamental tools in machine learning that allow detection of non-linear dependencies between data without explicitly constructing feature vectors in high dimensional spaces. A major disadvantage of kernel methods is their poor scalability: primitives such as kernel PCA or kernel ridge regression generally take prohibitively large quadratic space and (at least) quadratic time, as kernel matrices are usually dense. Some methods for speeding up kernel linear algebra are known, but they all invariably take time exponential in either the dimension of the input point set (e.g., fast multipole methods suffer from the curse of dimensionality) or in the degree of the kernel function. Oblivious sketching has emerged as a powerful approach to speeding up numerical linear algebra over the past decade, but our understanding of oblivious sketching solutions for kernel matrices has remained quite limited, suffering from the aforementioned exponential dependence on input parameters. Our main contribution is a general method for applying sketching solutions developed in numerical linear algebra over the past decade to a tensoring of data points without forming the tensoring explicitly. This leads to the first oblivious sketch for the polynomial kernel with a target dimension that is only polynomially dependent on the degree of the kernel function, as well as the first oblivious sketch for the Gaussian kernel on bounded datasets that does not suffer from an exponential dependence on the dimensionality of input data points.
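A small numeric illustration of the tensoring identity this approach rests on: for the degree-$q$ polynomial kernel, $(x^\top y)^q = \langle x^{\otimes q}, y^{\otimes q} \rangle$, so any inner-product-preserving sketch of the tensored points approximately preserves the kernel. The toy below (our own, not the paper's sketch) materializes the $d^q$-dimensional tensoring and applies a naive dense Gaussian projection; avoiding exactly that materialization is the paper's contribution.

```python
# Numeric check: <x^{(tensor q)}, y^{(tensor q)}> = (x . y)^q, the degree-q
# polynomial kernel. This toy forms the d**q-dimensional tensoring explicitly;
# the oblivious sketches discussed above avoid doing so.
import numpy as np

rng = np.random.default_rng(0)
d, q = 4, 3
x, y = rng.standard_normal(d), rng.standard_normal(d)

def tensor_power(v, q):
    out = v
    for _ in range(q - 1):
        out = np.outer(out, v).ravel()           # builds v^{(tensor q)}
    return out                                   # dimension d**q

lhs = np.dot(x, y) ** q
rhs = np.dot(tensor_power(x, q), tensor_power(y, q))
print(np.isclose(lhs, rhs))                      # True

# Naive oblivious sketch of the tensoring: a dense Gaussian projection
# (illustrative only; structured sketches make this fast in practice).
m = 4096
S = rng.standard_normal((m, d ** q)) / np.sqrt(m)
print(lhs, float((S @ tensor_power(x, q)) @ (S @ tensor_power(y, q))))
```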
Kernel methods are fundamental in machine learning, and faster algorithms for kernel approximation provide direct speedups for many core tasks in machine learning. The polynomial kernel is especially important as other kernels can often be approximated by the polynomial kernel via a Taylor series expansion. Recent techniques in oblivious sketching reduce the dependence in the running time on the degree $q$ of the polynomial kernel from exponential to polynomial, which is useful for the Gaussian kernel, for which $q$ can be chosen to be polylogarithmic. However, for more slowly growing kernels, such as the neural tangent and arc-cosine kernels, $q$ needs to be polynomial, and previous work incurs a polynomial factor slowdown in the running time. We give a new oblivious sketch which greatly improves upon this running time, by removing the dependence on $q$ in the leading order term. Combined with a novel sampling scheme, we give the fastest algorithms for approximating a large family of slow-growing kernels.
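The Taylor-series reduction mentioned above is easy to check numerically: for unit-norm points the Gaussian kernel factors as $e^{-1}\exp(\langle x, y \rangle)$, so truncating the exponential series leaves a weighted sum of polynomial kernels of degree up to $q$. A small sketch of our own, not the paper's code:

```python
# Gaussian kernel via a truncated Taylor series of polynomial kernels,
# for unit-norm points: exp(-|x-y|^2 / 2) = e^{-1} * sum_j (x . y)^j / j!.
import math
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(6), rng.standard_normal(6)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)   # bounded dataset

exact = math.exp(-float(np.sum((x - y) ** 2)) / 2)
for q in (1, 2, 4, 8):
    approx = math.exp(-1.0) * sum(np.dot(x, y) ** j / math.factorial(j)
                                  for j in range(q + 1))
    print(q, exact, approx)   # the error decays rapidly with the degree q
```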
We study the space complexity of sketching cuts and Laplacian quadratic forms of graphs. We show that any data structure which approximately stores the sizes of all cuts in an undirected graph on $n$ vertices up to a $1+\epsilon$ error must use $\Omega(n \log n / \epsilon^2)$ bits of space in the worst case, improving the $\Omega(n/\epsilon^2)$ bound of Andoni et al. and matching the best known upper bound achieved by spectral sparsifiers. Our proof is based on a rigidity phenomenon for cut (and spectral) approximation which may be of independent interest: any two $d$-regular graphs which approximate each other's cuts significantly better than a random graph approximates the complete graph must overlap in a constant fraction of their edges.