In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: $\min\|Lx\|$ subject to $\min\|Ax - b\|$, where $L$ is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to $A$. We obtain rank-$k$ truncated randomized SVD (TRSVD) approximations to $A$ by truncating rank-$(k+q)$ RSVD approximations to $A$, where $q$ is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as $k$ increases, so that LSQR converges faster. We present sharp bounds on the approximation accuracy of the RSVDs and TRSVDs for severely, moderately, and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how the stopping tolerance for LSQR should be chosen in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions computed by MTRSVD are as accurate as those computed by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those computed by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms.
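As a concrete illustration of the truncation step described in this abstract, the sketch below forms a rank-$(k+q)$ RSVD approximation with a Gaussian range finder and truncates it to rank $k$. It is a minimal generic TRSVD in the style of Halko, Martinsson, and Tropp, not the authors' full MTRSVD implementation; the function name `trsvd` and the default oversampling are illustrative assumptions.

```python
import numpy as np

def trsvd(A, k, q=10, seed=0):
    """Rank-k TRSVD sketch: build a rank-(k+q) randomized SVD
    approximation of A and truncate it to rank k. `q` is the
    oversampling parameter from the abstract; this is a generic
    illustration, not the authors' MTRSVD code."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis of range(A @ Omega)
    B = Q.T @ A                               # (k+q) x n projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k, :]    # truncate the RSVD to rank k
```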
Based on the joint bidiagonalization process of a large matrix pair $\{A, L\}$, we propose and develop an iterative regularization algorithm for large-scale linear discrete ill-posed problems in general-form regularization: $\min\|Lx\|$ subject to $x \in \mathcal{S} = \{x \mid \|Ax - b\| \leq \tau\|e\|\}$, where $e$ is Gaussian white noise, $\tau > 1$ is slightly greater than one, and $L$ is a regularization matrix. Our algorithm differs from the hybrid one proposed by Kilmer et al., which is based on the same process but solves the general-form Tikhonov regularization problem $\min_x \left\{\|Ax - b\|^2 + \lambda^2\|Lx\|^2\right\}$. We prove that the iterates take the form of attractive filtered generalized singular value decomposition (GSVD) expansions, where the filters are given explicitly. This result and its analysis show that the method has the desired semi-convergence property and give insight into the regularizing effects of the method. We use the L-curve criterion or the discrepancy principle to determine the optimal number of iterations $k^*$. The algorithm is simple and effective, and numerical experiments illustrate that it often computes more accurate regularized solutions than the hybrid one.
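The discrepancy principle mentioned above admits a simple generic statement: stop at the first iterate whose residual norm falls below $\tau\|e\|$. The helper below is a hedged sketch of that stopping rule for any sequence of regularized iterates; it is not the authors' joint-bidiagonalization code, and `iterates` stands for whatever iteration produces $x_1, x_2, \dots$.

```python
import numpy as np

def discrepancy_stop(iterates, A, b, noise_norm, tau=1.01):
    """Pick k* by the discrepancy principle: stop at the first iterate
    whose residual drops below tau * ||e||. `iterates` is any sequence
    of regularized solutions x_1, x_2, ... (for instance produced by an
    iterative regularization method); a generic sketch, not the
    authors' implementation."""
    for k, x in enumerate(iterates, start=1):
        if np.linalg.norm(A @ x - b) <= tau * noise_norm:
            return k, x
    raise RuntimeError("discrepancy level never reached")
```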
Many applications in science and engineering require the solution of large linear discrete ill-posed problems that arise from the discretization of a Fredholm integral equation of the first kind in several space dimensions. The matrix that defines these problems is very ill-conditioned and generally numerically singular, and the right-hand side, which represents measured data, is typically contaminated by measurement error. Straightforward solution of these problems is generally not meaningful because of severe error propagation. Tikhonov regularization seeks to alleviate this difficulty by replacing the given linear discrete ill-posed problem with a penalized least-squares problem, whose solution is less sensitive to the error in the right-hand side and to round-off errors introduced during the computations. This paper discusses the construction of penalty terms that are determined by solving a matrix-nearness problem. These penalty terms allow partial transformation to standard form of Tikhonov regularization problems that stem from the discretization of integral equations on a cube in several space dimensions.
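For orientation, the penalized least-squares problem that Tikhonov regularization substitutes for the original one, $\min_x\{\|Ax-b\|^2+\lambda^2\|Lx\|^2\}$, can be solved as a single stacked least-squares problem. The dense sketch below is generic; it takes the penalty matrix $L$, whose construction is this paper's actual subject, as given.

```python
import numpy as np

def tikhonov(A, b, L, lam):
    """General-form Tikhonov: min_x ||A x - b||^2 + lam^2 ||L x||^2,
    solved as one stacked least-squares problem. A minimal dense
    illustration; the penalty matrix L is assumed to be supplied."""
    K = np.vstack([A, lam * L])                       # stacked coefficient matrix
    rhs = np.concatenate([b, np.zeros(L.shape[0])])   # zero block for the penalty
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x
```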
GMRES is one of the most popular iterative methods for the solution of large linear systems of equations that arise from the discretization of linear well-posed problems, such as Dirichlet boundary value problems for elliptic partial differential equations. The method is also applied to the iterative solution of linear systems of equations obtained by discretizing linear ill-posed problems, such as many inverse problems. However, GMRES does not always perform well when applied to the latter kind of problem. This paper seeks to shed some light on the reasons for the poor performance of GMRES in certain situations, and discusses some remedies based on specific kinds of preconditioning. The standard implementation of GMRES is based on the Arnoldi process, which can also be used to define a solution subspace for Tikhonov or TSVD regularization, giving rise to the Arnoldi-Tikhonov and Arnoldi-TSVD methods, respectively. The performance of the GMRES, Arnoldi-Tikhonov, and Arnoldi-TSVD methods is discussed, and numerical examples illustrate properties of these methods.
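A minimal sketch of the Arnoldi-Tikhonov idea for a square matrix $A$ is given below: run $k$ Arnoldi steps starting from $b/\|b\|$, then solve the projected Tikhonov problem in the resulting Krylov subspace. The code assumes the standard-form penalty ($L = I$) and is an illustration of the method named in the abstract, not the authors' implementation.

```python
import numpy as np

def arnoldi_tikhonov(A, b, k, lam):
    """Arnoldi-Tikhonov sketch (L = I): k Arnoldi steps with v1 = b/||b||,
    then solve min_y ||H_k y - beta e_1||^2 + lam^2 ||y||^2 and set
    x = V_k y. Assumes A is square, as in GMRES."""
    n = b.size
    beta = np.linalg.norm(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / beta
    for j in range(k):                       # Arnoldi with modified Gram-Schmidt
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:              # lucky breakdown: shrink the basis
            H = H[: j + 2, : j + 1]
            V = V[:, : j + 2]
            break
        V[:, j + 1] = w / H[j + 1, j]
    kk = H.shape[1]
    e1 = np.zeros(H.shape[0]); e1[0] = beta
    K = np.vstack([H, lam * np.eye(kk)])     # stacked projected Tikhonov system
    rhs = np.concatenate([e1, np.zeros(kk)])
    y, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return V[:, :kk] @ y
```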
The hierarchical SVD provides a quasi-best low-rank approximation of high-dimensional data in the hierarchical Tucker framework. Like the SVD for matrices, it is a fundamental but expensive tool for tensor computations. In the present work we examine generalizations of randomized matrix decomposition methods to higher-order tensors in the framework of the hierarchical tensor representation. In particular, we present and analyze a randomized algorithm for the calculation of the hierarchical SVD (HSVD) for the tensor train (TT) format.
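To make the idea concrete for the TT format, the sketch below sweeps over the tensor modes and replaces the exact SVD of each unfolding in the classical TT-SVD with a randomized range finder. The function name, the fixed target ranks, and the oversampling parameter `q` are illustrative assumptions; the paper's algorithm for the HSVD is more general.

```python
import numpy as np

def randomized_tt(X, ranks, q=5, seed=0):
    """Randomized TT decomposition sketch: for each mode, approximate
    the dominant column space of the current unfolding with a Gaussian
    range finder plus a truncated SVD of the projected matrix, then
    carry the projected remainder to the next mode. `ranks` are target
    TT ranks (length d-1); a simplified illustration only."""
    rng = np.random.default_rng(seed)
    dims = X.shape
    d = len(dims)
    cores, r_prev = [], 1
    C = X.reshape(dims[0], -1)
    for mu in range(d - 1):
        r = ranks[mu]
        C = C.reshape(r_prev * dims[mu], -1)
        Omega = rng.standard_normal((C.shape[1], r + q))
        Q, _ = np.linalg.qr(C @ Omega)               # randomized range basis
        Ub, _, _ = np.linalg.svd(Q.T @ C, full_matrices=False)
        U = Q @ Ub[:, :r]                            # rank-r dominant basis
        cores.append(U.reshape(r_prev, dims[mu], r))
        C = U.T @ C                                  # project the remainder
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))     # last TT core
    return cores
```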
Quaternion matrix approximation problems construct the approximating matrix via the quaternion singular value decomposition (SVD) by selecting some SVD triplets of quaternion matrices. In applications such as color image processing and recognition, only a small number of dominant SVD triplets are selected, while in applications such as the quaternion total least squares problem, small SVD triplets (small singular values and the associated singular vectors) and the numerical rank with respect to a small threshold are required. In this paper, we propose a randomized quaternion SVD (randsvdQ) method to compute a small number of SVD triplets of a large-scale quaternion matrix. Theoretical results about the approximation errors are given, and the corresponding algorithm is adapted to the low-rank matrix approximation problem. When the restricted rank increases, information about the small SVD triplets might be lost. A blocked quaternion randomized SVD algorithm is then developed for the case where the numerical rank and information about small singular values are required. For color face recognition problems, numerical results show good performance of the developed quaternion randomized SVD method for the low-rank approximation of a large-scale quaternion matrix. The blocked randomized SVD algorithm is also shown to be more robust than the unblocked method in several experiments, and the approximation errors from the blocked scheme are very close to the optimal error obtained by truncating a full SVD.
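The blocked idea can be outlined in ordinary real arithmetic, where a randQB-style iteration applies: grow the range basis block by block until the residual norm meets a threshold, so that the numerical rank and small singular values are resolved adaptively. The sketch below is therefore only a real-arithmetic analogue of the quaternion algorithm, with all names and defaults chosen for illustration.

```python
import numpy as np

def blocked_rand_svd(A, block=10, tol=1e-8, max_blocks=50, seed=0):
    """Blocked randomized SVD sketch (randQB-style): grow the range
    basis Q block by block until ||A - Q B||_F <= tol, then recover
    singular triplets from the small factor B. Real-arithmetic
    analogue of the blocked quaternion scheme in the abstract."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Qs, Bs = [], []
    E = A.copy()                                 # current residual A - Q B
    for _ in range(max_blocks):
        Omega = rng.standard_normal((n, block))
        Qi, _ = np.linalg.qr(E @ Omega)
        for Qj in Qs:                            # re-orthogonalize vs. earlier blocks
            Qi -= Qj @ (Qj.T @ Qi)
        Qi, _ = np.linalg.qr(Qi)
        Bi = Qi.T @ A
        Qs.append(Qi); Bs.append(Bi)
        E -= Qi @ Bi
        if np.linalg.norm(E) <= tol:             # Frobenius-norm stopping test
            break
    Q = np.hstack(Qs); B = np.vstack(Bs)
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub, s, Vt
```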