
Quantile-based Iterative Methods for Corrupted Systems of Linear Equations

Publication date: 2020
Language: English





Often in applications ranging from medical imaging and sensor networks to error correction and data science (and beyond), one needs to solve large-scale linear systems in which a fraction of the measurements have been corrupted. We consider solving such large-scale systems of linear equations $\mathbf{A}\mathbf{x}=\mathbf{b}$ that are inconsistent due to corruptions in the measurement vector $\mathbf{b}$. We develop several variants of iterative methods that converge to the solution of the uncorrupted system of equations, even in the presence of large corruptions. These methods make use of a quantile of the absolute values of the residual vector in determining the iterate update. We present both theoretical and empirical results that demonstrate the promise of these iterative approaches.
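
As a rough illustration of the idea, the following is a minimal sketch (in Python/NumPy) of one quantile-based Kaczmarz variant, written only from the description above: each step samples a row uniformly, compares its absolute residual against a chosen quantile of all absolute residuals, and applies the usual Kaczmarz projection only when the row does not look corrupted. The function name, the quantile level, and the use of the full residual at every step are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def quantile_kaczmarz(A, b, q=0.7, n_iters=20000, rng=None):
    """Hedged sketch of a quantile-based randomized Kaczmarz iteration."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iters):
        r = A @ x - b                        # residual of every equation
        thresh = np.quantile(np.abs(r), q)   # q-quantile of |residuals|
        i = rng.integers(m)                  # uniformly sampled row
        if abs(r[i]) <= thresh:              # skip rows that look corrupted
            a_i = A[i]
            x = x - (r[i] / (a_i @ a_i)) * a_i
    return x

# Hypothetical demo: row-normalized Gaussian system, 5% large corruptions in b.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20))
A /= np.linalg.norm(A, axis=1, keepdims=True)
x_true = rng.standard_normal(20)
b = A @ x_true
b[rng.choice(500, size=25, replace=False)] += 10.0
x_hat = quantile_kaczmarz(A, b)
print(np.linalg.norm(x_hat - x_true))        # distance to the uncorrupted solution
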




Related research

We consider linear systems $Ax = b$ where $A \in \mathbb{R}^{m \times n}$ consists of normalized rows, $\|a_i\|_{\ell^2} = 1$, and where up to $\beta m$ entries of $b$ have been corrupted (possibly by arbitrarily large numbers). Haddock, Needell, Rebrova and Swartworth propose a quantile-based Random Kaczmarz method and show that for certain random matrices $A$ it converges with high likelihood to the true solution. We prove a deterministic version by constructing, for any matrix $A$, a number $\beta_A$ such that there is convergence for all perturbations with $\beta < \beta_A$. Assuming a random matrix heuristic, this proves convergence for tall Gaussian matrices with up to $\sim 0.5\%$ corruption (a number that can likely be improved).
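
Restated for concreteness (this only rewrites the setting described above; the precise definition of $\beta_A$ is given in the cited paper and is not reproduced here): the measurements take the form

$b = A x^* + e$, with $\|a_i\|_{\ell^2} = 1$ for every row and $\#\{i : e_i \neq 0\} \le \beta m$,

where the nonzero entries of $e$ may be arbitrarily large, and the claim is that the quantile-based Random Kaczmarz iterates converge to the uncorrupted solution $x^*$ whenever $\beta < \beta_A$.
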
Projection-based iterative methods for solving large over-determined linear systems are well known for their simplicity and computational efficiency. It is also known that the correct choice of a sketching procedure (i.e., preprocessing steps that reduce the dimension of each iteration) can improve the performance of iterative methods in multiple ways, such as speeding up convergence by mitigating inner correlations of the system, or reducing the variance incurred by the presence of noise. In the current work, we show that sketching can also help us obtain better theoretical guarantees for projection-based methods. Specifically, we use good properties of Gaussian sketching to prove an accelerated convergence rate of the sketched relaxation (also known as Motzkin's) method. The new estimates hold for linear systems of arbitrary structure. We also provide numerical experiments in support of our theoretical analysis of the sketched relaxation method.
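
A minimal sketch of what a Gaussian-sketched relaxation (Motzkin) step can look like is given below, assuming a fresh Gaussian sketch per iteration and greedy selection of the most-violated sketched equation; the exact sketching scheme and step size analyzed in the work above may differ, so this is only one plausible instantiation.

import numpy as np

def sketched_motzkin(A, b, sketch_size=10, n_iters=2000, rng=None):
    """Hedged sketch of a Gaussian-sketched Motzkin (relaxation) iteration."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iters):
        S = rng.standard_normal((sketch_size, m)) / np.sqrt(sketch_size)
        SA, Sb = S @ A, S @ b                # sketched system (S A) x = S b
        r = SA @ x - Sb
        i = np.argmax(np.abs(r))             # Motzkin rule: most-violated equation
        g = SA[i]
        x = x - (r[i] / (g @ g)) * g         # orthogonal projection onto it
    return x
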
Measurement data in linear systems arising from real-world applications often suffers from both large, sparse corruptions and widespread small-scale noise. This can render many popular solvers ineffective, as the least squares solution is far from the desired solution and the underlying consistent system becomes harder to identify and solve. QuantileRK is a member of the Kaczmarz family of iterative projection methods that has been shown to converge exponentially for systems with arbitrarily large sparse corruptions. In this paper, we extend the analysis to the case where not only corruptions are present, but also noise that may affect every data point, and prove that QuantileRK converges at the same rate up to an error threshold. We give both theoretical and experimental results demonstrating QuantileRK's strength.
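
The data model described above can be made concrete with a short snippet (all sizes and magnitudes are hypothetical): every entry of $b$ carries small noise, a $\beta$ fraction carries additional large corruptions, and a residual quantile cleanly separates the two scales, which is the mechanism a quantile-based method relies on.

import numpy as np

rng = np.random.default_rng(0)
m, n, beta = 1000, 50, 0.05                      # hypothetical problem sizes
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)    # normalize rows
x_true = rng.standard_normal(n)

noise = 1e-3 * rng.standard_normal(m)            # small noise on every entry
e = np.zeros(m)
corrupt_idx = rng.choice(m, size=int(beta * m), replace=False)
e[corrupt_idx] = 10.0                            # large, sparse corruptions

b = A @ x_true + noise + e
r = np.abs(b - A @ x_true)                       # residuals at the true solution
print(np.quantile(r, 0.9))                       # roughly the noise scale
print(r[corrupt_idx].min())                      # orders of magnitude larger
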
An approach is given for solving large linear systems that combines Krylov methods with the use of two different grid levels. Eigenvectors are computed on the coarse grid and used to deflate eigenvalues on the fine grid. GMRES-type methods are first used on both the coarse and fine grids. Then another approach is given that uses a restarted BiCGStab (or IDR) method on the fine grid. While BiCGStab is generally considered a non-restarted method, it works well in this context with deflating and restarting. Tests show this new approach can be very efficient for difficult linear systems.
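
As a simplified, single-grid illustration of the deflation idea (not the two-grid method described above, which computes the eigenvectors on a coarse grid and also uses restarted BiCGStab/IDR), the sketch below builds a spectral preconditioner from a few computed eigenpairs and hands it to SciPy's restarted GMRES; the test matrix, number of deflated eigenpairs, and solver settings are all illustrative assumptions.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, eigsh, gmres

# Hypothetical test problem: 1D Laplacian (symmetric, ill-conditioned).
N = 400
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(N, N), format="csc")
b = np.ones(N)

# A few of the smallest eigenpairs (computed here on the fine grid itself,
# rather than on a coarse grid as in the paper).
vals, W = eigsh(A, k=8, sigma=0.0, which="LM")

def apply_M(v):
    # Spectral deflation preconditioner: sends the deflated eigenvalues to 1
    # and acts as the identity on the orthogonal complement.
    v = np.ravel(v)
    return v + W @ ((1.0 / vals - 1.0) * (W.T @ v))

M = LinearOperator((N, N), matvec=apply_M)

x_plain, _ = gmres(A, b, restart=30, maxiter=200, atol=1e-10)
x_defl, _ = gmres(A, b, M=M, restart=30, maxiter=200, atol=1e-10)
print(np.linalg.norm(b - A @ x_plain) / np.linalg.norm(b))   # plain GMRES(30)
print(np.linalg.norm(b - A @ x_defl) / np.linalg.norm(b))    # deflated GMRES(30)
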
Frank Uhlig, An-Bao Xu (2021)
For a linear matrix function $f$ in $X \in \mathbb{R}^{m \times n}$ we consider inhomogeneous linear matrix equations $f(X) = E$ for $E \neq 0$ that have or do not have solutions. For such systems we compute optimal norm-constrained solutions iteratively using the Conjugate Gradient and Lanczos methods in combination with the Moré-Sorensen optimizer. We build codes for ten linear matrix equations, of Sylvester, Lyapunov, Stein and structured types and their …
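
To make the equation type concrete, here is a hypothetical instance of one of the matrix equations named above, the Sylvester equation $AX + XB = E$, solved with SciPy's dense direct solver; the paper's own methods are iterative and norm-constrained, so this only illustrates what such a system looks like.

import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((4, 4))
E = rng.standard_normal((5, 4))

X = solve_sylvester(A, B, E)                 # solves A X + X B = E
print(np.linalg.norm(A @ X + X @ B - E))     # residual near machine precision
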
