
Stability of the Lanczos Method for Matrix Function Approximation

Added by Christopher Musco
Publication date: 2017
Language: English





The ubiquitous Lanczos method can approximate $f(A)x$ for any symmetric $n \times n$ matrix $A$, vector $x$, and function $f$. In exact arithmetic, the method's error after $k$ iterations is bounded by the error of the best degree-$k$ polynomial uniformly approximating $f(x)$ on the range $[\lambda_{\min}(A), \lambda_{\max}(A)]$. However, despite decades of work, it has been unclear if this powerful guarantee holds in finite precision. We resolve this problem, proving that when $\max_{x \in [\lambda_{\min}, \lambda_{\max}]} |f(x)| \le C$, Lanczos essentially matches the exact arithmetic guarantee if computations use roughly $\log(nC\|A\|)$ bits of precision. Our proof extends work of Druskin and Knizhnerman [DK91], leveraging the stability of the classic Chebyshev recurrence to bound the stability of any polynomial approximating $f(x)$. We also study the special case of $f(A) = A^{-1}$, where stronger guarantees hold. In exact arithmetic, Lanczos performs as well as the best polynomial approximating $1/x$ at each of $A$'s eigenvalues, rather than on the full eigenvalue range. In seminal work, Greenbaum gives an approach to extending this bound to finite precision: she proves that finite precision Lanczos and the related CG method match any polynomial approximating $1/x$ in a tiny range around each eigenvalue [Gre89]. For $A^{-1}$, this bound appears stronger than ours. However, we exhibit matrices with condition number $\kappa$ where exact arithmetic Lanczos converges in $\mathrm{polylog}(\kappa)$ iterations, but Greenbaum's bound predicts $\Omega(\kappa^{1/5})$ iterations. It thus cannot offer significant improvement over the $O(\kappa^{1/2})$ bound achievable via our result. Our analysis raises the question of whether convergence in fewer than $\mathrm{poly}(\kappa)$ iterations can be expected in finite precision, even for matrices with clustered, skewed, or otherwise favorable eigenvalue distributions.
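The Lanczos-based approximation of $f(A)x$ described above can be sketched in a few lines of NumPy. This is a minimal illustration (not the authors' implementation), assuming a dense symmetric array and using full reorthogonalization so that the iteration behaves like the exact-arithmetic idealization the abstract's bounds refer to:

```python
import numpy as np

def lanczos_fa(A, x, k, f):
    """Approximate f(A) @ x with k Lanczos iterations (illustrative sketch).

    A: symmetric (n, n) array; x: (n,) vector; f: function applied
    entrywise to eigenvalues (e.g. np.exp).
    """
    n = x.shape[0]
    Q = np.zeros((n, k + 1))
    alpha = np.zeros(k)
    beta = np.zeros(k)
    Q[:, 0] = x / np.linalg.norm(x)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # Full reorthogonalization: O(nk) extra work per step, but it keeps
        # the basis numerically orthogonal, mimicking exact arithmetic.
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-14:       # invariant subspace found; stop early
            k = j + 1
            break
        Q[:, j + 1] = w / beta[j]
    # Tridiagonal T_k; apply f via its (cheap, k x k) eigendecomposition.
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])   # f(T) e_1
    return np.linalg.norm(x) * (Q[:, :k] @ fT_e1)
```

The returned vector is $\|x\|\, Q_k f(T_k) e_1$, the standard Lanczos approximation whose error the exact-arithmetic polynomial bound controls.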





We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(\mathbf{A}) \mathbf{b}$ when $\mathbf{A}$ is a Hermitian matrix and $\mathbf{b}$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive \emph{a priori} and \emph{a posteriori} error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $\mathbf{A}$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $\mathbf{b}^{\mathsf{H}} f(\mathbf{A}) \mathbf{b}$, and demonstrate the effectiveness of our bounds with numerical experiments.
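The quadratic-form case $\mathbf{b}^{\mathsf{H}} f(\mathbf{A}) \mathbf{b}$ mentioned above admits a particularly compact sketch via the standard Gauss-quadrature identity $\mathbf{b}^{\mathsf{H}} f(\mathbf{A}) \mathbf{b} \approx \|b\|^2\, e_1^{\mathsf{T}} f(T_k) e_1$. The code below (restricted to real symmetric $A$ for brevity; the function name is hypothetical, not from the paper) illustrates this:

```python
import numpy as np

def lanczos_quadform(A, b, k, f):
    """Estimate b^T f(A) b as ||b||^2 * e_1^T f(T_k) e_1 (illustrative sketch)."""
    n = b.shape[0]
    Q = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # reorthogonalize
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    theta, S = np.linalg.eigh(T)
    # Gauss-quadrature view: Ritz values theta are the nodes, and the squared
    # first components of the Ritz vectors are the weights.
    return np.linalg.norm(b) ** 2 * np.sum(f(theta) * S[0, :] ** 2)
```

Because $k$-step Lanczos quadrature is exact for polynomials of degree up to $2k-1$, this estimate typically converges quickly for smooth $f$.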
Philippe G. LeFloch, 2008
We develop a version of Haar and Holmgren methods which applies to discontinuous solutions of nonlinear hyperbolic systems and allows us to control the L1 distance between two entropy solutions. The main difficulty is to cope with linear hyperbolic systems with discontinuous coefficients. Our main observation is that, while entropy solutions contain compressive shocks only, the averaged matrix associated with two such solutions has compressive or undercompressive shocks, but no rarefaction-shocks -- which are recognized as a source for non-uniqueness and instability. Our Haar-Holmgren-type method rests on the geometry associated with the averaged matrix and takes into account adjoint problems and wave cancellations along generalized characteristics. It extends the method proposed earlier by LeFloch et al. for genuinely nonlinear systems. In the present paper, we cover solutions with small total variation and a class of systems with general flux that need not be genuinely nonlinear and includes, for instance, fluid dynamics equations. We prove that solutions generated by Glimm or front tracking schemes depend continuously in the L1 norm upon their initial data, by exhibiting an L1 functional controlling the distance between two solutions.
We provide a randomized linear-time approximation scheme for a generic problem about clustering of binary vectors subject to additional constraints. The new constrained clustering problem encompasses a number of problems and, by solving it, we obtain the first linear-time approximation schemes for a number of well-studied fundamental problems concerning clustering of binary vectors and low-rank approximation of binary matrices. Among the problems solvable by our approach are \textsc{Low GF(2)-Rank Approximation}, \textsc{Low Boolean-Rank Approximation}, and vario
Moses Charikar, Lunjia Hu, 2021
In the non-negative matrix factorization (NMF) problem, the input is an $m \times n$ matrix $M$ with non-negative entries and the goal is to factorize it as $M \approx AW$. The $m \times k$ matrix $A$ and the $k \times n$ matrix $W$ are both constrained to have non-negative entries. This is in contrast to singular value decomposition, where the matrices $A$ and $W$ can have negative entries but must satisfy the orthogonality constraint: the columns of $A$ are orthogonal and the rows of $W$ are also orthogonal. The orthogonal non-negative matrix factorization (ONMF) problem imposes both the non-negativity and the orthogonality constraints, and previous work showed that it leads to better performance than NMF on many clustering tasks. We give the first constant-factor approximation algorithm for ONMF when one or both of $A$ and $W$ are subject to the orthogonality constraint. We also show an interesting connection to the correlation clustering problem on bipartite graphs. Our experiments on synthetic and real-world data show that our algorithm achieves similar or smaller errors compared to previous ONMF algorithms while ensuring perfect orthogonality (many previous algorithms do not satisfy the hard orthogonality constraint).
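For context on the unconstrained problem discussed above, plain NMF is commonly computed with the classical Lee-Seung multiplicative updates. The sketch below shows only those standard updates; it is not the paper's ONMF approximation algorithm, and the function name is an illustrative choice:

```python
import numpy as np

def nmf_multiplicative(M, k, iters=200, seed=0):
    """Classical multiplicative-update NMF (Lee & Seung sketch).

    Factorizes non-negative M (m x n) as A @ W with A (m x k), W (k x n)
    non-negative; updates preserve non-negativity by construction.
    """
    rng = np.random.default_rng(seed)
    m, n = M.shape
    A = rng.random((m, k)) + 1e-3
    W = rng.random((k, n)) + 1e-3
    eps = 1e-12  # guard against division by zero
    for _ in range(iters):
        # Each factor is rescaled elementwise by the ratio of the gradient's
        # positive and negative parts, which monotonically decreases
        # the Frobenius error ||M - A W||_F.
        W *= (A.T @ M) / (A.T @ A @ W + eps)
        A *= (M @ W.T) / (A @ W @ W.T + eps)
    return A, W
```

Note that nothing here enforces orthogonality of the columns of $A$ or rows of $W$; adding that hard constraint is exactly what makes ONMF harder.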
We provide a number of algorithmic results for the following family of problems: for a given binary $m \times n$ matrix $A$ and integer $k$, decide whether there is a simple binary matrix $B$ which differs from $A$ in at most $k$ entries. For an integer $r$, the simplicity of $B$ is characterized as follows.
- Binary $r$-Means: Matrix $B$ has at most $r$ different columns. This problem is known to be NP-complete already for $r=2$. We show that the problem is solvable in time $2^{O(k \log k)} \cdot (nm)^{O(1)}$ and thus is fixed-parameter tractable parameterized by $k$. We prove that the problem admits a polynomial kernel when parameterized by $r$ and $k$, but has no polynomial kernel when parameterized by $k$ only unless $\mathrm{NP} \subseteq \mathrm{coNP/poly}$. We also complement these results by showing that when parameterized by $r$ and $k$, the problem admits an algorithm of running time $2^{O(r \cdot \sqrt{k \log(k+r)})} (nm)^{O(1)}$, which is subexponential in $k$ for $r \in O(k^{1/2 - \epsilon})$ for any $\epsilon > 0$.
- Low GF(2)-Rank Approximation: Matrix $B$ is of GF(2)-rank at most $r$. This problem is known to be NP-complete already for $r=1$. It is also known to be W[1]-hard when parameterized by $k$. Interestingly, when parameterized by $r$ and $k$, the problem is not only fixed-parameter tractable, but is solvable in time $2^{O(r^{3/2} \cdot \sqrt{k \log k})} (nm)^{O(1)}$, which is subexponential in $k$.
- Low Boolean-Rank Approximation: Matrix $B$ is of Boolean rank at most $r$. The problem is known to be NP-complete for $k=0$ as well as for $r=1$. We show that it is solvable in time $2^{O(r 2^r \cdot \sqrt{k \log k})} (nm)^{O(1)}$, which is subexponential in $k$.
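The Binary $r$-Means objective above can be made concrete: once columns are assigned to clusters, the optimal shared column for each cluster is its row-wise majority vector, so the cost of an assignment is a sum of per-row minority counts. The brute-force sketch below (a hypothetical helper for tiny inputs, enumerating all $r^n$ assignments; not the paper's fixed-parameter algorithms) illustrates the objective:

```python
import numpy as np
from itertools import product

def binary_r_means_cost(A, r):
    """Exact minimum number of entry changes needed so that the binary
    matrix A has at most r distinct columns. Exponential in n; for
    illustration on small instances only.
    """
    m, n = A.shape
    best = m * n
    for assign in product(range(r), repeat=n):
        cost = 0
        for c in range(r):
            cols = [j for j in range(n) if assign[j] == c]
            if not cols:
                continue
            ones = A[:, cols].sum(axis=1)
            # In each row, the majority entry minimizes flips in this cluster.
            cost += int(np.minimum(ones, len(cols) - ones).sum())
        best = min(best, cost)
    return best
```

For instance, a matrix whose columns already take only two distinct values has cost 0 for $r=2$, while forcing $r=1$ charges one flip per row wherever a row's entries disagree.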
