Spectral computations of infinite-dimensional operators are notoriously difficult, yet ubiquitous in the sciences. Indeed, despite more than half a century of research, it is still unknown which classes of operators allow for computation of spectra and eigenvectors with convergence rates and error control. Recent progress in classifying the difficulty of spectral problems into complexity hierarchies has revealed that the most difficult spectral problems are so hard that one needs three limits in the computation, and neither convergence rates nor error control are possible. This raises the question: which classes of operators allow for computations with convergence rates and error control? In this paper we address this basic question, and the algorithm used is an infinite-dimensional version of the QR algorithm (the IQR algorithm). Indeed, we generalise the QR algorithm to infinite-dimensional operators. We prove that not only is the algorithm executable on a finite machine, but one can also recover the extremal parts of the spectrum and corresponding eigenvectors, with convergence rates and error control. This allows for new classification results in the hierarchy of computational problems that existing algorithms have not been able to capture. The algorithm and convergence theorems are demonstrated on a wealth of examples, with comparisons to standard approaches (which are notorious for providing false solutions). We also find that in some cases the IQR algorithm performs better than predicted by theory, and we make conjectures for future study.
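For intuition, a minimal finite-dimensional sketch of the kind of iteration involved is given below: plain unshifted QR applied to an $n \times n$ section of a bounded self-adjoint tridiagonal operator, whose diagonal approaches the extremal parts of the spectrum first. The test operator and all names are illustrative; the paper's IQR algorithm acts on the infinite matrix itself and, unlike this truncation-based toy, carries convergence rates and error control.

    import numpy as np

    def qr_iteration(A, iters=500):
        # Unshifted QR iteration: A_{k+1} = R_k Q_k = Q_k^* A_k Q_k is a
        # similarity transform, so eigenvalues are preserved while the
        # diagonal gradually orders itself toward the spectrum.
        Ak = A.copy()
        for _ in range(iters):
            Q, R = np.linalg.qr(Ak)
            Ak = R @ Q
        return np.sort(np.diag(Ak))

    # Illustrative n-by-n section of a discrete Schrodinger-like operator
    n = 50
    A = (np.diag(2.0 + np.cos(np.arange(1, n + 1)))
         + np.diag(0.5 * np.ones(n - 1), 1)
         + np.diag(0.5 * np.ones(n - 1), -1))
    print(qr_iteration(A)[-3:])        # extremal approximations
    print(np.linalg.eigvalsh(A)[-3:])  # reference values for the section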
New real structure-preserving decompositions are introduced to develop fast and robust algorithms for the (right) eigenproblem of general quaternion matrices. Under orthogonal JRS-symplectic transformations, the Francis JRS-QR step and the JRS-QR algorithm are first proposed for JRS-symmetric matrices and then applied to compute the Schur forms of quaternion matrices. A novel quaternion Givens matrix is defined and used to compute the QR factorization of quaternion Hessenberg matrices. An implicit double-shift quaternion QR algorithm is presented, with a technique for choosing shifts automatically, and is carried out entirely in real operations. Numerical experiments demonstrate the efficiency and accuracy of the newly proposed algorithms.
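To make the real structure-preserving idea concrete, the sketch below (illustrative, not the paper's JRS-QR algorithm) forms the standard $4n \times 4n$ real counterpart of a quaternion matrix $Q = A + Bi + Cj + Dk$. The map is a ring homomorphism, and the resulting real matrix is JRS-symmetric, which is precisely the structure the proposed orthogonal JRS-symplectic transformations preserve.

    import numpy as np

    def real_counterpart(A, B, C, D):
        # Real representation of Q = A + Bi + Cj + Dk; quaternion products
        # correspond to real matrix products under this block embedding.
        return np.block([[A, -B, -C, -D],
                         [B,  A, -D,  C],
                         [C,  D,  A, -B],
                         [D, -C,  B,  A]])

    rng = np.random.default_rng(0)
    n = 4
    A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
    M = real_counterpart(A, B, C, D)
    # The standard right eigenvalues of Q appear (with multiplicity, in
    # conjugate quadruples) in the spectrum of M; a structure-preserving
    # iteration exploits this fourfold redundancy instead of recomputing it.
    print(np.linalg.eigvals(M))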
Some fast algorithms for computing the eigenvalues of a block companion matrix $A = U + XY^H$, where $U \in \mathbb{C}^{n\times n}$ is unitary block circulant and $X, Y \in \mathbb{C}^{n \times k}$, have recently appeared in the literature. Most of these algorithms rely on the decomposition of $A$ as a product of scalar companion matrices, which turns into a factored representation of the Hessenberg reduction of $A$. In this paper we generalize the approach to encompass Hessenberg matrices of the form $A = U + XY^H$ where $U$ is a general unitary matrix. A remarkable case is $U$ unitary diagonal, which makes it possible to deal with interpolation techniques for rootfinding problems and nonlinear eigenvalue problems. Our extension exploits the properties of a larger matrix $\hat{A}$ obtained by a certain embedding of the Hessenberg reduction of $A$ suitable to maintain its structural properties. We show that $\hat{A}$ can be factored as a product of lower and upper unitary Hessenberg matrices possibly perturbed in the first $k$ rows, and, moreover, that such a data-sparse representation is well suited for the design of fast eigensolvers based on the QR/QZ iteration. The resulting algorithm is fast and backward stable.
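A concrete instance of the structure may be helpful: the classical companion matrix is already of the form $A = U + XY^H$ with $U$ a unitary cyclic shift and $k = 1$. The sketch below builds this form and checks its eigenvalues against the polynomial roots; it uses a dense $O(n^3)$ eigensolver as a baseline, whereas the factored data-sparse representation in the paper is what enables the fast structured QR iteration.

    import numpy as np

    # Companion matrix of p(z) = z^4 + 4z^3 + z^2 - 3z + 2
    a = np.array([2.0, -3.0, 1.0, 4.0])      # coefficients a_0, ..., a_{n-1}
    n = len(a)
    Z = np.roll(np.eye(n), 1, axis=0)        # unitary cyclic downshift
    x = (-a - Z[:, -1]).reshape(-1, 1)       # rank-one correction (k = 1)
    y = np.zeros((n, 1)); y[-1] = 1.0
    A = Z + x @ y.T                          # unitary-plus-rank-one form

    print(np.sort(np.linalg.eigvals(A)))                        # structured matrix
    print(np.sort(np.roots(np.concatenate(([1.0], a[::-1])))))  # same roots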
In this paper we consider the tensor completion problem, which has attracted considerable attention in machine learning. Our fast and accurate method is built by extending the $L_{2,1}$-norm minimization and QR decomposition (LNM-QR) method for matrix completion to tensor completion, and it differs from the popular tensor completion methods based on the tensor singular value decomposition (t-SVD). To shorten the computing time, the t-SVD is replaced by a method that computes an approximate t-SVD based on the QR decomposition (CTSVD-QR), which can be used to compute the largest $r$ $(r>0)$ singular values (tubes) and their associated singular vectors (of tubes) iteratively. In addition, we use the tensor $L_{2,1}$-norm instead of the tensor nuclear norm to minimize our model, because it is easier to optimize. To improve accuracy, the alternating direction method of multipliers (ADMM) plays a crucial part in our method. Numerical experiments show that our method is faster than state-of-the-art algorithms and has excellent accuracy.
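At the matrix level, the QR-based approximate SVD ingredient can be sketched as subspace iteration built entirely from QR factorizations; the tensor version applies this tube-wise in the Fourier domain of a third-order tensor, a bookkeeping step omitted here. Names and iteration counts are illustrative.

    import numpy as np

    def approx_svd_qr(M, r, iters=50):
        # Subspace iteration for the top-r singular triplets: at
        # convergence M @ V ~ U S and M.T @ U ~ V S, so the triangular
        # factor R approaches diag(s_1, ..., s_r) up to signs.
        V = np.linalg.qr(
            np.random.default_rng(0).standard_normal((M.shape[1], r)))[0]
        for _ in range(iters):
            U, _ = np.linalg.qr(M @ V)
            V, R = np.linalg.qr(M.T @ U)
        return U, np.abs(np.diag(R)), V

    M = np.random.default_rng(1).standard_normal((60, 40))
    U, s, V = approx_svd_qr(M, r=5)
    print(s)                                       # iterative estimates
    print(np.linalg.svd(M, compute_uv=False)[:5])  # reference values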
The Gaver-Stehfest algorithm is widely used for the numerical inversion of Laplace transforms. In this paper we provide the first rigorous study of the rate of convergence of the Gaver-Stehfest algorithm. We prove that Gaver-Stehfest approximations of order $n$ converge exponentially fast if the target function is analytic in a neighbourhood of a point, and that they converge at a rate $o(n^{-k})$ if the target function is $(2k+3)$-times differentiable at a point.
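For reference, a compact implementation of the algorithm under discussion is sketched below, following the standard Stehfest construction with exact rational weights; the choice $N = 14$ is a typical double-precision setting, not one prescribed by the paper.

    import math
    from fractions import Fraction
    from math import factorial, log

    def stehfest_coefficients(N):
        # Stehfest weights V_1..V_N (N even); they grow rapidly and
        # alternate in sign, the source of the method's sensitivity
        # in fixed-precision arithmetic.
        M = N // 2
        V = []
        for i in range(1, N + 1):
            s = Fraction(0)
            for k in range((i + 1) // 2, min(i, M) + 1):
                s += Fraction(k**M * factorial(2 * k),
                              factorial(M - k) * factorial(k) * factorial(k - 1)
                              * factorial(i - k) * factorial(2 * k - i))
            V.append((-1) ** (M + i) * s)
        return V

    def gaver_stehfest(F, t, N=14):
        # Approximate f(t) from the Laplace transform F(s).
        a = log(2.0) / t
        return a * sum(float(Vk) * F(k * a)
                       for k, Vk in enumerate(stehfest_coefficients(N), 1))

    # Sanity check: F(s) = 1/(s + 1) is the transform of f(t) = exp(-t)
    print(gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0), math.exp(-1.0))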
This paper analyzes the generalization error of two-layer neural networks for computing the ground state of the Schrödinger operator on a $d$-dimensional hypercube. We prove that the convergence rate of the generalization error is independent of the dimension $d$, under the a priori assumption that the ground state lies in a spectral Barron space. We verify this assumption by proving a new regularity estimate for the ground state in the spectral Barron space. The latter is achieved by a fixed-point argument based on the Krein-Rutman theorem.
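For context, the underlying variational problem can be stated as follows, assuming the standard setting of a Schrödinger operator $-\Delta + V$ on $\Omega = [0,1]^d$; the network width $m$ and parameters $(a_i, w_i, b_i)$ are generic two-layer notation rather than the paper's exact conventions:

    \lambda_1 = \min_{0 \neq u \in H^1(\Omega)}
        \frac{\int_\Omega |\nabla u|^2 + V u^2 \, dx}{\int_\Omega u^2 \, dx},
    \qquad
    u_\theta(x) = \sum_{i=1}^{m} a_i \, \sigma(w_i \cdot x + b_i).

The ground state is approximated by minimizing an empirical (Monte Carlo) version of this Rayleigh quotient over the two-layer class $u_\theta$; the generalization error then measures the gap between the empirical minimizer and the true ground state.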