
Unifying Matrix Data Structures: Simplifying and Speeding up Iterative Algorithms

Added by Jan van den Brand
Publication date: 2020
Language: English





Many algorithms use data structures that maintain properties of matrices undergoing changes. The applications are wide-ranging and include, for example, matchings, shortest paths, linear programming, semi-definite programming, convex hull and volume computation. Given the wide range of applications, the exact property these data structures must maintain varies from one application to another, forcing algorithm designers to invent them from scratch or modify existing ones. Thus it is not surprising that these data structures and their proofs are usually tailor-made for their specific application, and that maintaining more complicated properties results in more complicated proofs. In this paper we present a unifying framework that captures a wide range of these data structures. The simplicity of this framework allows us to give short proofs for many existing data structures, regardless of how complicated the maintained property is. We also show how the framework can be used to speed up existing iterative algorithms, such as the simplex algorithm. More formally, consider any rational function $f(A_1,...,A_d)$ with input matrices $A_1,...,A_d$. We show that the task of maintaining $f(A_1,...,A_d)$ under updates to $A_1,...,A_d$ can be reduced to the much simpler problem of maintaining some matrix inverse $M^{-1}$ under updates to $M$. The latter is a well-studied problem called dynamic matrix inverse. By applying our reduction and using known algorithms for dynamic matrix inverse, we can obtain fast data structures and iterative algorithms for much more general problems.
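The reduction target is concrete enough to sketch: in its simplest, rank-1 form, dynamic matrix inverse follows from the Sherman-Morrison identity $(M + uv^T)^{-1} = M^{-1} - \frac{M^{-1}u v^T M^{-1}}{1 + v^T M^{-1} u}$, which refreshes the maintained inverse in $O(n^2)$ time per update instead of recomputing it from scratch. The sketch below is a minimal NumPy illustration of that rank-1 primitive, not the paper's data structure; the class name and interface are ours.

```python
import numpy as np

class DynamicInverse:
    """Maintain M^{-1} under rank-1 updates M <- M + u v^T
    via the Sherman-Morrison formula (O(n^2) per update)."""

    def __init__(self, M):
        self.inv = np.linalg.inv(M)

    def update(self, u, v):
        # (M + u v^T)^{-1} = M^{-1} - (M^{-1} u)(v^T M^{-1}) / (1 + v^T M^{-1} u)
        inv_u = self.inv @ u           # column vector M^{-1} u
        vt_inv = v @ self.inv          # row vector v^T M^{-1}
        denom = 1.0 + v @ inv_u
        if abs(denom) < 1e-12:
            raise ValueError("update makes M singular")
        self.inv -= np.outer(inv_u, vt_inv) / denom
        return self.inv

# Usage: a single-entry update M[i, j] += delta is u = delta * e_i, v = e_j.
n = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n)) + n * np.eye(n)
dyn = DynamicInverse(M)
u, v = rng.standard_normal(n), rng.standard_normal(n)
dyn.update(u, v)
assert np.allclose(dyn.inv, np.linalg.inv(M + np.outer(u, v)))
```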



Related research

Nathan Lindzey (2014)
Given a graph $G$, a vertex switch of $v \in V(G)$ results in a new graph where neighbors of $v$ become non-neighbors and vice versa. This operation gives rise to an equivalence relation over the set of labeled digraphs on $n$ vertices. The equivalence class of $G$ with respect to the switching operation is commonly referred to as $G$'s switching class. The algebraic and combinatorial properties of switching classes have been studied in depth; however, they have not been studied as thoroughly from an algorithmic point of view. The intent of this work is to further investigate the algorithmic properties of switching classes. In particular, we show that switching classes can be used to asymptotically speed up several super-linear unweighted graph algorithms. The current techniques for speeding up graph algorithms are all somewhat involved, in that they employ sophisticated pre-processing, data structures, or word tricks on the RAM model to achieve at most an $O(\log n)$ speedup for sufficiently dense graphs. Our methods are much simpler and can result in super-polylogarithmic speedups. In particular, we achieve better bounds for diameter, transitive closure, bipartite maximum matching, and general maximum matching.
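For intuition, the switching operation itself is a one-liner on an adjacency matrix. The helper below (name is ours) illustrates it for a simple undirected graph given as a 0/1 matrix; the abstract's labeled-digraph setting is analogous. Switching twice at the same vertex restores the graph, one ingredient of the equivalence relation the abstract mentions.

```python
import numpy as np

def switch(adj, v):
    """Vertex switch at v: neighbors of v become non-neighbors
    and vice versa, on a simple undirected 0/1 adjacency matrix."""
    A = adj.copy()
    A[v, :] ^= 1          # complement row v
    A[:, v] ^= 1          # complement column v
    A[v, v] = 0           # keep the graph loop-free
    return A

# Switching is an involution: applying it twice at v is the identity.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=np.uint8)
assert (switch(switch(A, 0), 0) == A).all()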
In this paper we show how to recover spectral approximations of broad classes of structured matrices using only a polylogarithmic number of adaptive linear measurements of either the matrix or its inverse. Leveraging this result we obtain faster algorithms for a variety of linear algebraic problems. Key results include:
• A nearly linear time algorithm for solving linear systems in symmetric $M$-matrices, a strict superset of Laplacians and SDD matrices.
• An $\tilde{O}(n^2)$ time algorithm for solving $n \times n$ linear systems that are constant spectral approximations of Laplacians, or more generally, SDD matrices.
• An $\tilde{O}(n^2)$ algorithm to recover a spectral approximation of an $n$-vertex graph using only $\tilde{O}(1)$ matrix-vector multiplies with its Laplacian matrix.
The previous best results for each problem either used a trivial number of queries to exactly recover the matrix or a trivial $O(n^\omega)$ running time, where $\omega$ is the matrix multiplication constant. We achieve these results by generalizing the recent semidefinite-programming-based linear-sized sparsifier results of Lee and Sun (2017) and providing iterative methods inspired by the semi-streaming sparsification results of Kapralov, Lee, Musco, Musco and Sidford (2014) and the input-sparsity-time linear system solving results of Li, Miller, and Peng (2013). We hope that by initiating the study of these natural problems, expanding the robustness and scope of recent nearly linear time linear system solving research, and providing general matrix recovery machinery, this work may serve as a stepping stone for faster algorithms.
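For intuition about the notion used throughout these results: $B$ is an $\epsilon$-spectral approximation of $A$ when $(1-\epsilon)\,x^T A x \le x^T B x \le (1+\epsilon)\,x^T A x$ for all $x$. The toy check below probes this condition on random test vectors, which gives a necessary but not sufficient test; it is our illustration, not the paper's recovery algorithm.

```python
import numpy as np

def is_spectral_approx(A, B, eps, trials=200, seed=0):
    """Heuristic check that B is an eps-spectral approximation of A:
    (1-eps) x^T A x <= x^T B x <= (1+eps) x^T A x on random vectors."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    for _ in range(trials):
        x = rng.standard_normal(n)
        qa, qb = x @ A @ x, x @ B @ x
        if not ((1 - eps) * qa <= qb <= (1 + eps) * qa):
            return False
    return True

# Example: a triangle graph's Laplacian vs. a slightly rescaled copy.
L = np.array([[ 2, -1, -1],
              [-1,  2, -1],
              [-1, -1,  2]], dtype=float)
print(is_spectral_approx(L, 1.05 * L, eps=0.1))  # True
```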
In this paper, we design parallel write-efficient geometric algorithms that perform asymptotically fewer writes than standard algorithms for the same problem. This is motivated by emerging non-volatile memory technologies whose read performance is close to that of random access memory but whose writes are significantly more expensive in terms of energy and latency. We design algorithms for planar Delaunay triangulation, $k$-d trees, and static and dynamic augmented trees. Our algorithms are designed in the recently introduced Asymmetric Nested-Parallel Model, which captures the parallel setting in which there is a small symmetric memory where reads and writes are unit cost, as well as a large asymmetric memory where writes are $\omega$ times more expensive than reads. In designing these algorithms, we introduce several techniques for obtaining write-efficiency, including DAG tracing, prefix doubling, reconstruction-based rebalancing, and $\alpha$-labeling, which we believe will be useful for designing other parallel write-efficient algorithms.
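The asymmetric cost model is easy to instrument: charge 1 per read and $\omega$ per write, and prefer algorithms that trade extra reads for fewer writes. The toy sketch below (the class and the selection-sort example are ours, not from the paper) shows how an algorithm with $O(n^2)$ reads but only $O(n)$ writes has asymmetric cost $O(n^2 + \omega n)$ rather than $O(\omega n^2)$.

```python
class AsymmetricMemory:
    """Toy cost model for the asymmetric setting described above:
    reads cost 1, writes cost omega."""

    def __init__(self, data, omega):
        self.data = list(data)
        self.omega = omega
        self.cost = 0

    def read(self, i):
        self.cost += 1
        return self.data[i]

    def write(self, i, value):
        self.cost += self.omega
        self.data[i] = value

# In-place selection sort: O(n^2) reads but only O(n) writes.
mem = AsymmetricMemory([5, 2, 4, 1, 3], omega=10)
n = len(mem.data)
for i in range(n):
    m = min(range(i, n), key=mem.read)   # scan for the minimum (reads only)
    if m != i:
        vi, vm = mem.read(i), mem.read(m)
        mem.write(i, vm)                 # at most two writes per position
        mem.write(m, vi)
print(mem.data, mem.cost)
```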
A wide range of modern science and engineering applications are formulated as optimization problems with a system of partial differential equations (PDEs) as constraints. These PDE-constrained optimization problems are typically solved in a standard discretize-then-optimize approach. In many industry applications that require high-resolution solutions, the discretized constraints can easily have millions or even billions of variables, making it very slow for a standard iterative optimizer to compute exact gradients. In this work, we propose a general framework to speed up PDE-constrained optimization using online neural synthetic gradients (ONSG) with a novel two-scale optimization scheme. We successfully apply our ONSG framework to computational morphogenesis, a representative and challenging class of PDE-constrained optimization problems. Extensive experiments have demonstrated that our method can significantly speed up computational morphogenesis (also known as topology optimization) while maintaining the quality of the final solution compared to the standard optimizer. On a large-scale 3D optimal design problem with around 1,400,000 design variables, our method achieves up to a 7.5x speedup while producing optimized designs with comparable objectives.
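The two-scale idea can be sketched generically: evaluate the expensive exact gradient only every $K$ iterations (the coarse scale), fit a cheap online model on the (point, gradient) pairs collected so far, and step with the model's prediction in between (the fine scale). Everything below is our illustrative assumption, with a quadratic standing in for the PDE solve and a least-squares model standing in for the neural network; it is not the ONSG implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, lr, steps = 8, 5, 0.1, 300

# Stand-in for an expensive simulation: a strongly convex quadratic
# f(x) = 0.5 x^T A x - b^T x, whose exact gradient is A x - b.
Q = rng.standard_normal((n, n))
A = Q @ Q.T / n + np.eye(n)
b = rng.standard_normal(n)
exact_grad = lambda x: A @ x - b               # the "slow" oracle

xs, gs = [], []                                # (features, gradient) pairs
W = None                                       # linear model: grad ~ [x, 1] @ W
x = rng.standard_normal(n)
for t in range(steps):
    warm_up = len(xs) < n + 2                  # train on exact data first
    if warm_up or t % K == 0:                  # coarse scale: exact gradient
        g = exact_grad(x)
        xs.append(np.append(x, 1.0)); gs.append(g)
        W, *_ = np.linalg.lstsq(np.array(xs), np.array(gs), rcond=None)
    else:                                      # fine scale: synthetic gradient
        g = np.append(x, 1.0) @ W
    x -= lr * g

print(np.linalg.norm(exact_grad(x)))           # ~0 at the optimum
```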
Hierarchical clustering is a fundamental task often used to discover meaningful structures in data, such as phylogenetic trees, taxonomies of concepts, subtypes of cancer, and cascades of particle decays in particle physics. Typically, approximate algorithms are used for inference due to the combinatorial number of possible hierarchical clusterings. In contrast to existing methods, we present novel dynamic-programming algorithms for exact inference in hierarchical clustering based on a novel trellis data structure, and we prove that we can exactly compute the partition function, maximum likelihood hierarchy, and marginal probabilities of sub-hierarchies and clusters. Our algorithms scale in time and space proportional to the size of the power set of $N$ elements, which is super-exponentially more efficient than explicitly considering each of the $(2N-3)!!$ possible hierarchies. Also, for larger datasets where our exact algorithms become infeasible, we introduce an approximate algorithm based on a sparse trellis that compares well to other benchmarks. Exact methods are relevant to data analyses in particle physics and for finding correlations among gene expression in cancer genomics, and we give examples in both areas, where our algorithms outperform greedy and beam search baselines. In addition, we consider Dasgupta's cost with synthetic data.
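The $(2N-3)!!$ count quoted above is the number of rooted binary trees with $N$ labeled leaves, while the trellis works over subsets of the $N$ elements, i.e., $2^N$ states. The snippet below (function name is ours) makes the super-exponential gap concrete.

```python
from math import prod

def num_hierarchies(n):
    """Number of binary hierarchical clusterings over n labeled
    leaves: (2n - 3)!! = 1 * 3 * 5 * ... * (2n - 3)."""
    return prod(range(1, 2 * n - 2, 2))

# Trellis state space (2^n subsets) vs. all (2n-3)!! hierarchies.
for n in (5, 10, 20):
    print(n, 2 ** n, num_hierarchies(n))
```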