
On condition numbers of the total least squares problem with linear equality constraint

Published by: Qiaohua Liu
Publication date: 2020
Research field: Informatics Engineering
Language: English





This paper is devoted to condition numbers of the total least squares problem with linear equality constraint (TLSE). With novel limit techniques, closed formulae for the normwise, mixed and componentwise condition numbers of the TLSE problem are derived. Compact expressions and upper bounds for these condition numbers are also given to avoid costly Kronecker-product-based operations. The results unify those for the TLS problem. For TLSE problems with equilibratory input data, numerical experiments illustrate that the normwise condition number-based estimate is sharp for evaluating the forward error of the solution, while for sparse and badly scaled matrices, the mixed and componentwise condition number-based estimates are much tighter.
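For reference, these three condition numbers are usually defined via the following limits (the standard formulation from the conditioning literature; the paper derives closed formulae for the TLSE solution map, so the generic notation below is only illustrative). Writing x(d) for the solution as a function of the data d,

    \kappa(d) = \lim_{\varepsilon\to 0}\ \sup_{\|\Delta d\|\le\varepsilon\|d\|}\ \frac{\|x(d+\Delta d)-x(d)\|}{\varepsilon\,\|x(d)\|},
    m(d) = \lim_{\varepsilon\to 0}\ \sup_{|\Delta d|\le\varepsilon|d|}\ \frac{\|x(d+\Delta d)-x(d)\|_\infty}{\varepsilon\,\|x(d)\|_\infty},
    c(d) = \lim_{\varepsilon\to 0}\ \sup_{|\Delta d|\le\varepsilon|d|}\ \frac{1}{\varepsilon}\,\left\|\frac{x(d+\Delta d)-x(d)}{x(d)}\right\|_\infty,

where |\Delta d| \le \varepsilon|d| and the final quotient are understood componentwise.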




Read also

This paper is devoted to condition numbers of the multidimensional total least squares problem with linear equality constraint (TLSE). Based on the perturbation theory of invariant subspaces, the TLSE problem is proved to be equivalent, in the limit sense, to a multidimensional unconstrained weighted total least squares problem. With a limit technique, Kronecker-product-based formulae for the normwise, mixed and componentwise condition numbers of the minimum Frobenius norm TLSE solution are given. Compact upper bounds on these condition numbers are provided to reduce the storage and computation cost. All expressions and upper bounds of these condition numbers unify those for the single-dimensional TLSE problem and the multidimensional total least squares problem. Some numerical experiments are performed to illustrate our results.
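The motivation for avoiding explicit Kronecker products is the standard identity (B^T \otimes A) vec(X) = vec(A X B): the operator can be applied without ever storing the large Kronecker matrix. A minimal NumPy sketch of this general device (the sizes and random matrices are illustrative, not tied to the paper's formulae):

    import numpy as np

    # Illustrative sizes: forming kron(B.T, A) costs O(q*m * p*n) storage.
    m, n, p, q = 60, 50, 40, 30
    A = np.random.randn(m, n)
    B = np.random.randn(p, q)
    X = np.random.randn(n, p)

    # Naive: build the Kronecker product explicitly (large dense matrix).
    y_naive = np.kron(B.T, A) @ X.flatten(order="F")

    # Same product via the matrix equation: (B^T kron A) vec(X) = vec(A X B).
    y_fast = (A @ X @ B).flatten(order="F")

    assert np.allclose(y_naive, y_fast)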
We consider the problem of efficiently solving large-scale linear least squares problems that have one or more linear constraints that must be satisfied exactly. Whilst some classical approaches are theoretically well founded, they can face difficulties when the matrix of constraints contains dense rows, or if an algorithmic transformation used in the solution process results in a modified problem that is much denser than the original one. To address this, we propose modifications and new ideas, with an emphasis on requiring that the constraints be satisfied with a small residual. We examine combining the null-space method with our recently developed algorithm for computing a null space basis matrix for a "wide" matrix. We further show that a direct elimination approach enhanced by careful pivoting can be effective in transforming the problem to an unconstrained sparse-dense least squares problem that can be solved with existing direct or iterative methods. We also present a number of solution variants that employ an augmented system formulation, which can be attractive when solving a sequence of related problems. Numerical experiments using problems coming from practical applications are used throughout to demonstrate the effectiveness of the different approaches.
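To make the null-space method concrete, here is a minimal dense sketch for min ||Ax - b||_2 subject to Cx = d; it ignores the sparsity and dense-row issues the paper actually targets, and the helper name lse_nullspace is ours:

    import numpy as np
    from scipy.linalg import null_space, lstsq

    def lse_nullspace(A, b, C, d):
        """Solve min ||A x - b||_2 subject to C x = d via the null-space method."""
        # Particular solution of the (underdetermined) constraints C x = d.
        x_p = lstsq(C, d)[0]
        # Basis Z for the null space of C, so x = x_p + Z y satisfies C x = d.
        Z = null_space(C)
        # Reduced unconstrained problem: min_y ||(A Z) y - (b - A x_p)||_2.
        y = lstsq(A @ Z, b - A @ x_p)[0]
        return x_p + Z @ y

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 8))
    C = rng.standard_normal((3, 8))
    b = rng.standard_normal(20)
    d = rng.standard_normal(3)
    x = lse_nullspace(A, b, C, d)
    print(np.linalg.norm(C @ x - d))  # ~1e-15: constraints satisfied exactly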
Hanyu Li, Yanjun Zhang (2020)
Using a greedy strategy that first constructs a control index set of coordinates and then chooses the corresponding column submatrix in each iteration, we present a greedy block Gauss-Seidel (GBGS) method for solving large linear least squares problems. Theoretical analysis demonstrates that the convergence factor of the GBGS method can be much smaller than that of the greedy randomized coordinate descent (GRCD) method proposed recently in the literature. On the basis of the GBGS method, we further present a pseudoinverse-free greedy block Gauss-Seidel method, which no longer needs to calculate the Moore-Penrose pseudoinverse of the column submatrix in each iteration and hence can achieve greater acceleration. Moreover, this method is also suitable for distributed implementations. Numerical experiments show that, for the same accuracy, our methods can far outperform the GRCD method in terms of iteration count and computing time.
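The flavor of such greedy block Gauss-Seidel iterations can be sketched as follows. The selection score |A_j^T r|^2 / ||A_j||^2 is the standard greedy-coordinate criterion, but the block size, stopping rule, and function name here are illustrative assumptions rather than the paper's exact method:

    import numpy as np

    def greedy_block_gauss_seidel(A, b, block_size=5, iters=200):
        """Greedy block coordinate updates for min ||A x - b||_2 (illustrative)."""
        m, n = A.shape
        x = np.zeros(n)
        r = b - A @ x                       # current residual
        col_norms2 = np.sum(A * A, axis=0)  # ||A_j||^2 for the greedy scores
        for _ in range(iters):
            # Greedy score: squared correlation of each column with the residual.
            scores = (A.T @ r) ** 2 / col_norms2
            # Pick the block of coordinates with the largest scores.
            J = np.argpartition(scores, -block_size)[-block_size:]
            # Exactly minimize over the chosen block: min_d ||A[:, J] d - r||_2.
            dJ = np.linalg.lstsq(A[:, J], r, rcond=None)[0]
            x[J] += dJ
            r = b - A @ x
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 100))
    b = rng.standard_normal(500)
    x = greedy_block_gauss_seidel(A, b)
    print(np.linalg.norm(A.T @ (b - A @ x)))  # normal-equation residual -> small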
Yanjun Zhang, Hanyu Li (2020)
We present a novel greedy Gauss-Seidel method for solving large linear least squares problems. This method improves the greedy randomized coordinate descent (GRCD) method proposed recently by Bai and Wu [Bai ZZ and Wu WT. On greedy randomized coordinate descent methods for solving large linear least-squares problems. Numer Linear Algebra Appl. 2019;26(4):1--15], which in turn improves the popular randomized Gauss-Seidel method. Convergence analysis of the new method is provided. Numerical experiments show that, for the same accuracy, our method outperforms the GRCD method in terms of computing time.
We consider best approximation problems in a nonlinear subset $\mathcal{M}$ of a Banach space of functions $(\mathcal{V},\|\bullet\|)$. The norm is assumed to be a generalization of the $L^2$-norm for which only a weighted Monte Carlo estimate $\|\bullet\|_n$ can be computed. The objective is to obtain an approximation $v\in\mathcal{M}$ of an unknown function $u\in\mathcal{V}$ by minimizing the empirical norm $\|u-v\|_n$. We consider this problem for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and is independent of the nonlinear least squares setting. Several model classes are examined where analytical statements can be made about the RIP, and the results are compared to existing sample complexity bounds from the literature. We find that for well-studied model classes our general bound is weaker but exhibits many of the same properties as these specialized bounds. Notably, we demonstrate the advantage of an optimal sampling density (as known for linear spaces) for sets of functions with sparse representations.
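As a toy illustration of minimizing an empirical (weighted Monte Carlo) norm over a nonlinear model class: all function and parameter choices below are our own illustrative assumptions, not from the paper.

    import numpy as np
    from scipy.optimize import least_squares

    # Unknown target function on [0, 1].
    u = lambda x: np.exp(-3 * x) * np.sin(6 * x)

    # Nonlinear model class M = { a * exp(b*x) * sin(c*x) }, parameters (a, b, c).
    def v(theta, x):
        a, b, c = theta
        return a * np.exp(b * x) * np.sin(c * x)

    # Weighted Monte Carlo norm: ||f||_n^2 = (1/n) sum_i w(x_i) f(x_i)^2.
    # With uniform sampling the weights are constant.
    rng = np.random.default_rng(2)
    n = 200
    xs = rng.uniform(0.0, 1.0, n)
    w = np.ones(n)

    # Empirical best approximation: minimize ||u - v(theta)||_n over the class.
    res = least_squares(lambda theta: np.sqrt(w / n) * (u(xs) - v(theta, xs)),
                        x0=np.array([1.0, -1.0, 5.0]))
    print(res.x)  # fitted (a, b, c); ideally close to (1, -3, 6)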