
A Note on Error Bounds for Pseudo Skeleton Approximations of Matrices

Posted by Markus Hegland
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Low-rank approximations have recently been widely studied because of their importance in both data analysis and numerical algorithms: they make it possible to handle very large matrices. Tight error bounds are available for the computationally efficient, Gaussian-elimination-based methods known as skeleton approximations. In practice, these bounds are useful for matrices whose singular values decrease quickly. Using the Chebyshev norm, this paper provides improved bounds for the errors of the individual matrix elements. These bounds are substantially better in the practically relevant cases where the singular values decrease polynomially. Results are proven for general real rectangular matrices; even stronger bounds are obtained for symmetric positive definite matrices. A simple example compares the new bounds to earlier ones.
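To make the Chebyshev-norm error concrete, here is a minimal numpy sketch of a pseudo-skeleton (CUR) approximation assembled from a few rows and columns, evaluated on a test matrix whose singular values decay polynomially. The random row/column selection and the test matrix are illustrative assumptions; they are not the selection strategy analyzed in the paper.

```python
import numpy as np

def skeleton_approx(A, rows, cols):
    # Pseudo-skeleton (CUR) approximation A ~ C @ pinv(G) @ R built from
    # the chosen columns C, rows R, and their intersection G.
    C = A[:, cols]
    R = A[rows, :]
    G = A[np.ix_(rows, cols)]
    return C @ np.linalg.pinv(G) @ R

# Test matrix with polynomially decaying singular values sigma_j = j^(-2).
rng = np.random.default_rng(0)
m, n, k = 200, 150, 10
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.arange(1, n + 1, dtype=float) ** -2.0
A = U @ np.diag(sigma) @ V.T

# Illustrative selection: k random rows/columns (practical skeleton methods
# use e.g. maxvol-type pivoting instead).
rows = rng.choice(m, size=k, replace=False)
cols = rng.choice(n, size=k, replace=False)
A_k = skeleton_approx(A, rows, cols)

# Chebyshev norm = entrywise maximum error, the quantity the bounds control.
print("max_ij |A - A_k| =", np.max(np.abs(A - A_k)))
print("sigma_(k+1)      =", sigma[k])
```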




Read also

Dynamical spectral estimation is a well-established numerical approach for estimating eigenvalues and eigenfunctions of the Markov transition operator from trajectory data. Although the approach has been widely applied in biomolecular simulations, its error properties remain poorly understood. Here we analyze the error of a dynamical spectral estimation method called the variational approach to conformational dynamics (VAC). We bound the approximation error and estimation error for VAC estimates. Our analysis establishes VAC's convergence properties and suggests new strategies for tuning VAC to improve accuracy.
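As a rough sketch of the estimator being analyzed, using the common formulation of VAC as a generalized eigenproblem of correlation matrices (the function name and the plain plug-in estimators below are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.linalg import eigh

def vac_eigenpairs(features, lag):
    # features: (T, d) array whose rows are basis-function values phi(x_t)
    # evaluated along a trajectory; lag: lag time in frames.
    X, Y = features[:-lag], features[lag:]
    C0 = X.T @ X / len(X)        # instantaneous correlation matrix
    Ct = X.T @ Y / len(X)        # time-lagged correlation matrix
    Ct = 0.5 * (Ct + Ct.T)       # symmetrize, assuming reversible dynamics
    # Generalized symmetric eigenproblem Ct v = lambda C0 v; the leading
    # eigenvalues estimate eigenvalues of the Markov transition operator.
    evals, evecs = eigh(Ct, C0)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]
```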
We analyze the Lanczos method for matrix function approximation (Lanczos-FA), an iterative algorithm for computing $f(\mathbf{A})\mathbf{b}$ when $\mathbf{A}$ is a Hermitian matrix and $\mathbf{b}$ is a given vector. Assuming that $f : \mathbb{C} \rightarrow \mathbb{C}$ is piecewise analytic, we give a framework, based on the Cauchy integral formula, which can be used to derive \emph{a priori} and \emph{a posteriori} error bounds for Lanczos-FA in terms of the error of Lanczos used to solve linear systems. Unlike many error bounds for Lanczos-FA, these bounds account for fine-grained properties of the spectrum of $\mathbf{A}$, such as clustered or isolated eigenvalues. Our results are derived assuming exact arithmetic, but we show that they are easily extended to finite precision computations using existing theory about the Lanczos algorithm in finite precision. We also provide generalized bounds for the Lanczos method used to approximate quadratic forms $\mathbf{b}^{\mathsf{H}} f(\mathbf{A}) \mathbf{b}$, and demonstrate the effectiveness of our bounds with numerical experiments.
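A minimal sketch of Lanczos-FA itself (exact arithmetic, no reorthogonalization; the helper name and the test problem are hypothetical):

```python
import numpy as np

def lanczos_fa(A, b, k, f):
    # Approximate f(A) @ b for symmetric A: run k Lanczos steps to build an
    # orthonormal basis Q and tridiagonal T, then return ||b|| * Q @ f(T) @ e1.
    n = len(b)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, V = np.linalg.eigh(T)        # f(T) via eigendecomposition
    e1 = np.zeros(k)
    e1[0] = 1.0
    return np.linalg.norm(b) * (Q @ (V @ (f(evals) * (V.T @ e1))))

# Example: approximate exp(A) @ b for a random symmetric A.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = (M + M.T) / 20.0                    # scaled so exp(A) stays moderate
b = rng.standard_normal(100)
print(np.linalg.norm(lanczos_fa(A, b, 30, np.exp)))
```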
Di Fang, Albert Tres (2021)
Quantum-classical molecular dynamics, as a partial classical limit of the full quantum Schrödinger equation, is a widely used framework for quantum molecular dynamics. The underlying equations are nonlinear in nature, containing a quantum part (representing the electrons) and a classical part (representing the nuclei). An accurate simulation of the wave function typically requires a time step comparable to the rescaled Planck constant $h$, resulting in a formidable cost when $h \ll 1$. We prove an additive observable error bound for Schwartz observables for the proposed time-splitting schemes based on semiclassical analysis, which decreases as $h$ becomes smaller. Furthermore, we establish a uniform-in-$h$ observable error bound, which allows an $\mathcal{O}(1)$ time step to accurately capture the physical observables regardless of the size of $h$. Numerical results verify our estimates.
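For orientation only, here is one Strang time-splitting step for the semiclassical Schrödinger part on a periodic grid; this is a generic splitting sketch under standard assumptions, not the specific schemes analyzed in the paper, and the coupled classical update for the nuclei is omitted:

```python
import numpy as np

def strang_step(psi, V, dt, h, dx):
    # One Strang splitting step for i*h*psi_t = -(h^2/2)*psi_xx + V(x)*psi:
    # half potential step, full kinetic step in Fourier space, half potential.
    k = 2 * np.pi * np.fft.fftfreq(len(psi), d=dx)   # angular wavenumbers
    psi = np.exp(-0.5j * dt * V / h) * psi
    psi = np.fft.ifft(np.exp(-0.5j * dt * h * k**2) * np.fft.fft(psi))
    return np.exp(-0.5j * dt * V / h) * psi
```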
Artificial neural networks (ANNs) have become a very powerful tool in the approximation of high-dimensional functions. In particular, deep ANNs, consisting of a large number of hidden layers, have been used very successfully in a series of practically relevant computational problems involving high-dimensional input data, ranging from classification tasks in supervised learning to optimal decision problems in reinforcement learning. There are also a number of mathematical results in the scientific literature which study the approximation capacities of ANNs in the context of high-dimensional target functions. In particular, a series of such results shows that sufficiently deep ANNs have the capacity to overcome the curse of dimensionality in the approximation of certain target function classes, in the sense that the number of parameters of the approximating ANNs grows at most polynomially in the dimension $d \in \mathbb{N}$ of the target functions under consideration. In the proofs of several such high-dimensional approximation results it is crucial that the involved ANNs are sufficiently deep and consist of a sufficiently large number of hidden layers, growing with the dimension of the considered target functions. The topic of this work is to look in more detail at the depth of the involved ANNs in the approximation of high-dimensional target functions. In particular, the main result proves that there exists a concretely specified sequence of functions which can be approximated without the curse of dimensionality by sufficiently deep ANNs, but which cannot be approximated without the curse of dimensionality if the involved ANNs are shallow or not deep enough.
We give upper and lower bounds on the determinant of a perturbation of the identity matrix or, more generally, a perturbation of a nonsingular diagonal matrix. The matrices considered are, in general, diagonally dominant. The lower bounds are best possible, and in several cases they are stronger than well-known bounds due to Ostrowski and other authors. If $A = I - E$ is an $n \times n$ matrix and the elements of $E$ are bounded in absolute value by $\varepsilon \le 1/n$, then a lower bound of Ostrowski (1938) is $\det(A) \ge 1 - n\varepsilon$. We show that if, in addition, the diagonal elements of $E$ are zero, then a best-possible lower bound is \[\det(A) \ge (1-(n-1)\varepsilon)\,(1+\varepsilon)^{n-1}.\] Corresponding upper bounds are respectively \[\det(A) \le (1 + 2\varepsilon + n\varepsilon^2)^{n/2}\] and \[\det(A) \le (1 + (n-1)\varepsilon^2)^{n/2}.\] The first upper bound is stronger than Ostrowski's bound (for $\varepsilon < 1/n$) $\det(A) \le (1 - n\varepsilon)^{-1}$. The second upper bound generalises Hadamard's inequality, which is the case $\varepsilon = 1$. A necessary and sufficient condition for our upper bounds to be best possible for matrices of order $n$ and all positive $\varepsilon$ is the existence of a skew-Hadamard matrix of order $n$.
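The bounds are easy to sanity-check numerically; a minimal sketch, on a hypothetical random instance with zero-diagonal $E$ and $\varepsilon \le 1/n$:

```python
import numpy as np

# Check det(A) against the stated bounds for A = I - E with diag(E) = 0 and
# |E_ij| <= eps, where eps = 0.1 satisfies eps <= 1/n = 0.125. The random
# instance is for illustration only.
rng = np.random.default_rng(1)
n, eps = 8, 0.1
E = rng.uniform(-eps, eps, size=(n, n))
np.fill_diagonal(E, 0.0)
det = np.linalg.det(np.eye(n) - E)

lower = (1 - (n - 1) * eps) * (1 + eps) ** (n - 1)  # best-possible lower bound
upper = (1 + (n - 1) * eps**2) ** (n / 2)           # zero-diagonal upper bound
print(f"{lower:.6f} <= {det:.6f} <= {upper:.6f}")
assert lower <= det <= upper
```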