
Error bounds for Lanczos-based matrix function approximation

Added by Tyler Chen
Publication date: 2021
Language: English





We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(\mathbf{A})\mathbf{b}$ when $\mathbf{A}$ is a Hermitian matrix and $\mathbf{b}$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive \emph{a priori} and \emph{a posteriori} error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $\mathbf{A}$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $\mathbf{b}^{\mathsf{H}} f(\mathbf{A}) \mathbf{b}$, and demonstrate the effectiveness of our bounds with numerical experiments.
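The abstract does not include code, but a minimal sketch of the Lanczos-FA iterate it refers to may help fix ideas: run $k$ Lanczos steps on $\mathbf{A}$ with starting vector $\mathbf{b}$, then evaluate $f$ on the resulting small tridiagonal matrix. The function name lanczos_fa, the choice $f = \exp$, and the random test matrix are illustrative assumptions, not the paper's code.

# Sketch of Lanczos-FA: approximate f(A) @ b for Hermitian A using k Lanczos
# steps and f evaluated on the k x k tridiagonal matrix T_k.
import numpy as np

def lanczos_fa(A, b, f, k):
    n = b.shape[0]
    Q = np.zeros((n, k), dtype=complex)
    alpha = np.zeros(k)
    beta = np.zeros(k)                       # beta[j] couples step j to step j+1
    q = b / np.linalg.norm(b)
    q_prev = np.zeros(n, dtype=complex)
    for j in range(k):
        Q[:, j] = q
        w = A @ q
        alpha[j] = np.real(np.vdot(q, w))
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0)
        if j + 1 < k:                        # no reorthogonalization; fine for a sketch
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])         # f(T_k) @ e_1
    return np.linalg.norm(b) * (Q @ fT_e1)           # Lanczos-FA iterate

# Example: approximate exp(A) @ b for a random Hermitian A with spectrum ~ [-2, 2].
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = (M + M.T) / (2 * np.sqrt(200))
b = rng.standard_normal(200)
approx = lanczos_fa(A, b, np.exp, 30)
lam, V = np.linalg.eigh(A)
exact = V @ (np.exp(lam) * (V.T @ b))
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))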

Related research

The ubiquitous Lanczos method can approximate $f(A)x$ for any symmetric $n \times n$ matrix $A$, vector $x$, and function $f$. In exact arithmetic, the method's error after $k$ iterations is bounded by the error of the best degree-$k$ polynomial uniformly approximating $f(x)$ on the range $[\lambda_{\min}(A), \lambda_{\max}(A)]$. However, despite decades of work, it has been unclear whether this powerful guarantee holds in finite precision. We resolve this problem, proving that when $\max_{x \in [\lambda_{\min}, \lambda_{\max}]} |f(x)| \le C$, Lanczos essentially matches the exact arithmetic guarantee if computations use roughly $\log(nC\|A\|)$ bits of precision. Our proof extends work of Druskin and Knizhnerman [DK91], leveraging the stability of the classic Chebyshev recurrence to bound the stability of any polynomial approximating $f(x)$. We also study the special case of $f(A) = A^{-1}$, where stronger guarantees hold. In exact arithmetic, Lanczos performs as well as the best polynomial approximating $1/x$ at each of $A$'s eigenvalues, rather than on the full eigenvalue range. In seminal work, Greenbaum gives an approach to extending this bound to finite precision: she proves that finite precision Lanczos and the related CG method match any polynomial approximating $1/x$ in a tiny range around each eigenvalue [Gre89]. For $A^{-1}$, this bound appears stronger than ours. However, we exhibit matrices with condition number $\kappa$ where exact arithmetic Lanczos converges in $\mathrm{polylog}(\kappa)$ iterations, but Greenbaum's bound predicts $\Omega(\kappa^{1/5})$ iterations. It thus cannot offer significant improvement over the $O(\kappa^{1/2})$ bound achievable via our result. Our analysis raises the question of whether convergence in fewer than $\mathrm{poly}(\kappa)$ iterations can be expected in finite precision, even for matrices with clustered, skewed, or otherwise favorable eigenvalue distributions.
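To make the uniform polynomial guarantee concrete, here is a small check that reuses the lanczos_fa sketch from the block above: it compares the Lanczos error for $f = \exp$ against a Chebyshev interpolant of $f$ on $[\lambda_{\min}(A), \lambda_{\max}(A)]$. The interpolant is only a near-best degree-$k$ polynomial, so it serves as an easy-to-compute proxy for the bound quoted here; the test matrix and the iteration counts are illustrative.

# Compare the Lanczos-FA error with a proxy for the best-uniform-polynomial bound.
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(1)
M = rng.standard_normal((300, 300))
A = (M + M.T) / (2 * np.sqrt(300))           # spectrum roughly in [-2, 2]
b = rng.standard_normal(300)
lam, V = np.linalg.eigh(A)
exact = V @ (np.exp(lam) * (V.T @ b))

for k in (2, 4, 8, 12):
    approx = lanczos_fa(A, b, np.exp, k)     # sketch defined above
    lanczos_err = np.linalg.norm(approx - exact)
    # Near-best degree-k polynomial: Chebyshev interpolant of exp on [lam_min, lam_max].
    p = C.Chebyshev.interpolate(np.exp, k, domain=[lam[0], lam[-1]])
    poly_err = np.max(np.abs(np.exp(lam) - p(lam))) * np.linalg.norm(b)
    print(f"k={k:2d}  Lanczos error {lanczos_err:.2e}  uniform-poly proxy {poly_err:.2e}")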
Dynamical spectral estimation is a well-established numerical approach for estimating eigenvalues and eigenfunctions of the Markov transition operator from trajectory data. Although the approach has been widely applied in biomolecular simulations, its error properties remain poorly understood. Here we analyze the error of a dynamical spectral estimation method called the variational approach to conformational dynamics (VAC). We bound the approximation error and estimation error for VAC estimates. Our analysis establishes VAC's convergence properties and suggests new strategies for tuning VAC to improve accuracy.
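As a rough illustration of the VAC procedure analyzed above (not the paper's implementation), the sketch below estimates lag-$\tau$ correlation matrices from a trajectory in a user-chosen basis and solves the resulting generalized eigenvalue problem; the toy trajectory, monomial basis, and helper name vac_eigenvalues are assumptions made purely for illustration.

# Generic VAC-style estimator: generalized eigenvalue problem C(tau) v = lambda C(0) v.
import numpy as np
from scipy.linalg import eigh

def vac_eigenvalues(traj, basis, tau):
    """traj: (T, d) samples; basis: maps (T, d) -> (T, m) features; tau: lag time."""
    Phi0 = basis(traj[:-tau])
    Phit = basis(traj[tau:])
    C0 = Phi0.T @ Phi0 / Phi0.shape[0]                           # same-time correlations
    Ct = 0.5 * (Phi0.T @ Phit + Phit.T @ Phi0) / Phi0.shape[0]   # symmetrized lag-tau correlations
    vals, vecs = eigh(Ct, C0)                                    # generalized symmetric eigenproblem
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

# Toy usage: an AR(1) (Ornstein-Uhlenbeck-like) trajectory with a monomial basis.
rng = np.random.default_rng(2)
x = np.zeros(10000)
for t in range(1, x.size):
    x[t] = 0.95 * x[t - 1] + 0.1 * rng.standard_normal()
traj = x[:, None]
basis = lambda X: np.hstack([np.ones_like(X), X, X**2])
evals, _ = vac_eigenvalues(traj, basis, tau=10)
print(evals)   # leading value ~1 (constant function); the rest estimate decay of correlations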
Let $A$ be a square complex matrix; let $z_1, \dots, z_N \in \mathbb{C}$ be arbitrary (possibly repeated) interpolation points; let $f$ be an analytic function defined on a neighborhood of the convex hull of the union of the spectrum $\sigma(A)$ of the matrix $A$ and the points $z_1, \dots, z_N$; and let the rational function $r = \frac{u}{v}$ (with the degree of the numerator $u$ less than $N$) interpolate $f$ at these points (counted according to their multiplicities). Under these assumptions, estimates of the kind $$ \bigl\lVert f(A) - r(A) \bigr\rVert \le \max_{t \in [0,1];\ \mu \in \text{convex hull}\{z_1, z_2, \dots, z_N\}} \biggl\lVert \Omega(A)\,[v(A)]^{-1}\, \frac{(vf)^{(N)}\bigl((1-t)\mu\mathbf{1} + tA\bigr)}{N!} \biggr\rVert, $$ where $\Omega(z) = \prod_{k=1}^{N} (z - z_k)$, are proposed. As an example illustrating the accuracy of such estimates, an approximation of the impulse response of a dynamic system obtained using the reduced-order Arnoldi method is considered, and the actual accuracy of the approximation is compared with the estimate based on this paper.
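The concluding example uses the reduced-order Arnoldi method; the hedged sketch below shows the standard Arnoldi approximation $f(A)b \approx \lVert b\rVert\, Q_N f(H_N) e_1$, which interpolates $f$ at the Ritz values and is therefore one instance of the interpolants $r$ considered above (with $v \equiv 1$). The test system, the time $t$, and the name arnoldi_fa are illustrative assumptions.

# Arnoldi approximation of f(A) b for a (generally non-Hermitian) A; here f(z) = exp(t z),
# so f(A) b is the impulse response of x' = A x at time t.
import numpy as np
from scipy.linalg import expm

def arnoldi_fa(A, b, f_of_matrix, N):
    n = b.shape[0]
    Q = np.zeros((n, N + 1), dtype=complex)
    H = np.zeros((N + 1, N), dtype=complex)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(N):
        w = A @ Q[:, j]
        for i in range(j + 1):                       # modified Gram-Schmidt
            H[i, j] = np.vdot(Q[:, i], w)
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        Q[:, j + 1] = w / H[j + 1, j]
    fH = f_of_matrix(H[:N, :N])                      # f on the small upper Hessenberg matrix
    return np.linalg.norm(b) * (Q[:, :N] @ fH[:, 0])

rng = np.random.default_rng(3)
A = -np.eye(100) + 0.5 * rng.standard_normal((100, 100)) / np.sqrt(100)  # stable test system
b = rng.standard_normal(100)
t = 1.0
approx = arnoldi_fa(A, b, lambda Hm: expm(t * Hm), 25)
print(np.linalg.norm(approx - expm(t * A) @ b))      # actual approximation error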
Due to their importance in both data analysis and numerical algorithms, low-rank approximations have recently been widely studied. They enable the handling of very large matrices. Tight error bounds for the computationally efficient Gaussian-elimination-based methods (skeleton approximations) are available. In practice, these bounds are useful for matrices whose singular values decrease quickly. Using the Chebyshev norm, this paper provides improved bounds for the errors of the matrix elements. These bounds are substantially better in the practically relevant cases where the eigenvalues decrease polynomially. Results are proven for general real rectangular matrices. Even stronger bounds are obtained for symmetric positive definite matrices. A simple example is given, comparing these new bounds to earlier ones.
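A minimal sketch of a skeleton (cross) approximation may clarify the quantities involved; it uses a simple greedy pivoting rule rather than the specific scheme whose bounds are analyzed above, and the Chebyshev norm reported at the end is the entrywise maximum absolute error.

# Skeleton approximation: A ~ A[:, J] @ pinv(A[I, J]) @ A[I, :] for k chosen rows/columns.
import numpy as np

def skeleton_approx(A, k):
    R = A.astype(float).copy()
    I, J = [], []
    for _ in range(k):                                       # greedy full pivoting on the residual
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        I.append(i); J.append(j)
        R = R - np.outer(R[:, j], R[i, :]) / R[i, j]         # Gaussian-elimination (cross) step
    core = np.linalg.pinv(A[np.ix_(I, J)])
    return A[:, J] @ core @ A[I, :]

# Example: a matrix with quickly decaying singular values.
rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.standard_normal((200, 200)))
V, _ = np.linalg.qr(rng.standard_normal((200, 200)))
s = 2.0 ** -np.arange(200)
A = (U * s) @ V.T
Ak = skeleton_approx(A, 10)
print("Chebyshev-norm error:", np.max(np.abs(A - Ak)))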
Di Fang, Albert Tres (2021)
Quantum-classical molecular dynamics, as a partial classical limit of the full quantum Schrodinger equation, is a widely used framework for quantum molecular dynamics. The underlying equations are nonlinear in nature, containing a quantum part (representing the electrons) and a classical part (representing the nuclei). An accurate simulation of the wave function typically requires a time step comparable to the rescaled Planck constant $h$, resulting in a formidable cost when $h \ll 1$. We prove an additive observable error bound of Schwartz observables for the proposed time-splitting schemes based on semiclassical analysis, which decreases as $h$ becomes smaller. Furthermore, we establish a uniform-in-$h$ observable error bound, which allows an $\mathcal{O}(1)$ time step to accurately capture the physical observable regardless of the size of $h$. Numerical results verify our estimates.
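The schemes analyzed above couple a time-splitting quantum propagator to classical nuclei; the sketch below shows only the generic Strang-splitting building block for a 1D semiclassical Schrodinger equation (the classical coupling and the paper's specific schemes are omitted), with the grid, potential, and value of $h$ chosen purely for illustration.

# Strang splitting for i*h dpsi/dt = -(h^2/2) psi'' + V(x) psi on a periodic grid:
# half potential step, full kinetic step in Fourier space, half potential step.
import numpy as np

def strang_step(psi, V, x, h, dt):
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])     # Fourier wavenumbers
    psi = np.exp(-1j * V(x) * dt / (2 * h)) * psi             # half potential step
    psi = np.fft.ifft(np.exp(-1j * h * k**2 * dt / 2) * np.fft.fft(psi))  # kinetic step
    psi = np.exp(-1j * V(x) * dt / (2 * h)) * psi             # half potential step
    return psi

h = 1e-2
x = np.linspace(-np.pi, np.pi, 1024, endpoint=False)
V = lambda x: 0.5 * x**2
# Semiclassical wave packet: Gaussian envelope with oscillatory phase of order 1/h.
psi = np.exp(-(x + 1.0)**2 / (2 * 0.1)) * np.exp(1j * 0.5 * x / h)
psi /= np.linalg.norm(psi)
for _ in range(1000):
    psi = strang_step(psi, V, x, h, dt=1e-3)
print("norm preserved:", np.linalg.norm(psi))   # each sub-step is unitary, so the norm stays ~1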
