
A fast algorithm for globally solving Tikhonov regularized total least squares problem

Published by: Yong Xia
Publication date: 2018
Language: English

The total least squares problem with general Tikhonov regularization can be reformulated as a one-dimensional parametric minimization problem (PM), where each evaluation of the parametric function amounts to solving an n-dimensional trust-region subproblem. Under a mild assumption, the parametric function is differentiable, and an efficient bisection method for solving (PM) has been proposed in the literature. In the first part of this paper, we show that the bisection algorithm can be greatly improved by shrinking the initial interval estimated to contain the optimal parameter. We also observe that the bisection method is not guaranteed to find the globally optimal solution, since the nonconvex (PM) can have a local non-global minimizer. The main contribution of this paper is an efficient branch-and-bound algorithm for globally solving (PM), based on a novel underestimation of the parametric function over any given interval that uses only the function values at the two endpoints. We show that the new algorithm (the BTD Algorithm) returns a global epsilon-approximate solution with a computational effort of at most O(n^3/epsilon), under the same assumption as the bisection method. Numerical results demonstrate that the new global optimization algorithm runs substantially faster than even the improved version of the bisection heuristic.
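For intuition, here is a minimal sketch of a one-dimensional interval branch-and-bound of the kind the BTD Algorithm instantiates. The paper's underestimator is tailored to the parametric trust-region function; as a stand-in, this sketch uses a generic Lipschitz (Shubert-type) bound, which likewise needs only the two endpoint evaluations of each interval. The toy function g, the Lipschitz constant L, and the tolerances are illustrative assumptions, not the paper's setting.

```python
import heapq
import math

def branch_and_bound_1d(f, a, b, L, eps=1e-4, max_iter=50000):
    """Globally minimize an L-Lipschitz f on [a, b] to eps accuracy.

    Each interval's lower bound (fa + fb)/2 - L*(b - a)/2 uses only the
    endpoint values, mirroring the endpoint-only underestimation idea;
    the paper's actual bound for the trust-region function is sharper.
    """
    fa, fb = f(a), f(b)
    best_x, best_val = (a, fa) if fa <= fb else (b, fb)
    heap = [(0.5 * (fa + fb) - 0.5 * L * (b - a), a, b, fa, fb)]
    for _ in range(max_iter):
        if not heap:
            break
        lb, lo, hi, flo, fhi = heapq.heappop(heap)
        if lb >= best_val - eps:          # nothing left can beat the incumbent
            break
        mid = 0.5 * (lo + hi)             # branch: split at the midpoint
        fmid = f(mid)
        if fmid < best_val:
            best_x, best_val = mid, fmid
        for u, v, fu, fv in ((lo, mid, flo, fmid), (mid, hi, fmid, fhi)):
            child_lb = 0.5 * (fu + fv) - 0.5 * L * (v - u)
            if child_lb < best_val - eps: # bound: prune hopeless intervals
                heapq.heappush(heap, (child_lb, u, v, fu, fv))
    return best_x, best_val

# Toy run on a nonconvex function with a local non-global minimizer,
# the situation in which plain bisection can be trapped.
g = lambda t: 0.05 * (t - 1.0) ** 2 - math.cos(3.0 * t)
print(branch_and_bound_1d(g, -2.0, 4.0, L=4.0))
```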


Read also

In recent studies on sparse modeling, $l_q$ ($0<q<1$) regularized least squares regression ($l_q$LS) has received considerable attention due to its advantages over convex counterparts in inducing sparsity and reducing bias. In this paper, we propose a Gauss-Seidel iterative thresholding algorithm (called GAITA) for solving this problem. Unlike the classical iterative thresholding algorithms, which use the Jacobi updating rule, GAITA takes advantage of the Gauss-Seidel rule to update the coordinate coefficients. Under a mild condition, we show that the support set and sign of any sequence generated by GAITA converge within finitely many iterations. This convergence property, together with the Kurdyka-Łojasiewicz property of ($l_q$LS), yields the strong convergence of GAITA under the same condition, which is generally weaker than the conditions required for the convergence of the classical iterative thresholding algorithms. Furthermore, we show that GAITA converges to a local minimizer under certain additional conditions. Numerical experiments demonstrate the effectiveness of GAITA and, in particular, its much faster convergence compared with the classical iterative thresholding algorithms.
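To make the Gauss-Seidel updating rule concrete, here is a minimal sketch of coordinate-wise iterative thresholding for the $l_q$LS objective 0.5||Ax-b||^2 + lam*sum_i |x_i|^q. The scalar $l_q$ proximal step is solved by naive grid search; for q = 1/2 and 2/3 closed-form thresholding formulas exist, and the names and parameter choices below are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def lq_prox_1d(c, w, q, grid=2001):
    """argmin_z 0.5*(z - c)^2 + w*|z|^q, by brute force over a grid.

    A naive stand-in: for q = 1/2 or 2/3, closed-form thresholding
    operators are known and would be used in a real implementation.
    """
    zs = np.append(np.linspace(-abs(c) - 1.0, abs(c) + 1.0, grid), 0.0)
    return zs[np.argmin(0.5 * (zs - c) ** 2 + w * np.abs(zs) ** q)]

def gaita_sketch(A, b, lam, q=0.5, n_sweeps=100):
    """Gauss-Seidel iterative thresholding for l_q-regularized LS.

    Coordinates are updated one at a time, with the residual refreshed
    immediately after each update (Gauss-Seidel), instead of all at
    once from the previous iterate (Jacobi).
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                       # residual, kept consistent with x
    col_sq = (A ** 2).sum(axis=0)       # squared column norms
    for _ in range(n_sweeps):
        for i in range(n):
            c = x[i] + A[:, i] @ r / col_sq[i]    # 1-D subproblem center
            xi_new = lq_prox_1d(c, lam / col_sq[i], q)
            r += A[:, i] * (x[i] - xi_new)        # Gauss-Seidel refresh
            x[i] = xi_new
    return x

# Toy run on a small sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 58]] = [2.0, -1.5, 1.0]
b = A @ x_true
x = gaita_sketch(A, b, lam=0.5)
print("nnz:", np.count_nonzero(np.abs(x) > 1e-6),
      "residual:", np.linalg.norm(A @ x - b))
```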
Fan Wu, Wei Bian, Xiaoping Xue (2021)
We investigate a class of constrained sparse regression problems with a cardinality penalty, where the feasible set is defined by box constraints and the loss function is convex but not necessarily smooth. First, we put forward a smoothing fast iterative hard thresholding (SFIHT) algorithm for solving such optimization problems, which combines smoothing approximations, extrapolation techniques, and iterative hard thresholding methods. The extrapolation coefficients in the proposed algorithm can be chosen to satisfy $\sup_k \beta_k = 1$. We discuss the convergence behavior of the algorithm for different extrapolation coefficients and give sufficient conditions ensuring that any accumulation point of the iterates is a local minimizer of the original cardinality-penalized problem. In particular, for a class of fixed extrapolation coefficients, we discuss several update rules for the smoothing parameter and obtain a convergence rate of $O(\ln k / k)$ for the loss and objective function values. Second, we consider the case in which the loss function is Lipschitz continuously differentiable and develop a fast iterative hard thresholding (FIHT) algorithm to solve it. We prove that the iterates of FIHT converge to a local minimizer of the problem that satisfies a desirable lower-bound property. Moreover, we show that the convergence rate of the loss and objective function values is $o(k^{-2})$. Finally, numerical examples are presented to illustrate the theoretical results.
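As a concrete illustration of the second part, here is a minimal sketch of fast iterative hard thresholding for the unconstrained cardinality-penalized least squares model 0.5||Ax-b||^2 + lam*||x||_0. The FISTA-style extrapolation coefficients satisfy $\sup_k \beta_k = 1$ as in the discussion above; the step size, the update rule, and the absence of box constraints are simplifying assumptions, not the authors' exact algorithm.

```python
import numpy as np

def fiht_sketch(A, b, lam, n_iter=500):
    """Hard-thresholding gradient steps on an extrapolated point.

    The prox of t*lam*||.||_0 keeps a coordinate v iff v^2/(2t) > lam,
    i.e. |v| > sqrt(2*lam*t), which gives the hard-threshold level.
    """
    n = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the smooth loss
    thresh = np.sqrt(2.0 * lam * step)
    x = x_prev = np.zeros(n)
    t = 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        beta = (t - 1.0) / t_next               # extrapolation, sup_k beta_k = 1
        y = x + beta * (x - x_prev)             # extrapolated point
        z = y - step * (A.T @ (A @ y - b))      # gradient step on the loss
        x_prev, x = x, np.where(np.abs(z) > thresh, z, 0.0)  # hard threshold
        t = t_next
    return x
```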
Marc Baboulin (2010)
We derive closed formulas for the condition number of a linear function of the total least squares solution. Given an overdetermined linear system Ax=b, we show that this condition number can be computed using the singular values and the right singular vectors of [A,b] and A. We also provide an upper bound that requires the computation of the largest and the smallest singular value of [A,b] and the smallest singular value of A. In numerical examples, we compare these values and the resulting forward error bounds with existing error estimates.
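The quantities entering these formulas are standard. As a hedged sketch, the snippet below computes the total least squares solution and the singular values of [A,b] and A via the SVD, which is exactly where both the closed formulas and the upper bound live; the condition number formula itself is not reproduced here, and the test matrices are arbitrary.

```python
import numpy as np

def tls_via_svd(A, b):
    """Total least squares solution of Ax ~ b via the SVD of [A, b].

    Standard construction (Golub & Van Loan): the TLS solution exists
    and is unique when sigma_min(A) > sigma_min([A, b]), and is read
    off the right singular vector of [A, b] for its smallest singular
    value.  The paper's condition number is built from these same
    singular values and vectors.
    """
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, s, Vt = np.linalg.svd(C)
    sig_A = np.linalg.svd(A, compute_uv=False)
    assert sig_A[-1] > s[-1], "TLS genericity condition fails"
    v = Vt[-1]                       # right singular vector for sigma_min([A,b])
    x = -v[:n] / v[n]                # TLS solution
    return x, s, sig_A

# Toy run: the singular values that enter the paper's bounds.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = A @ rng.standard_normal(5) + 0.01 * rng.standard_normal(20)
x, s, sig_A = tls_via_svd(A, b)
print("sigma([A,b]) extremes:", s[0], s[-1], " sigma_min(A):", sig_A[-1])
```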
We conduct a study and comparison of superiorization and optimization approaches for the reconstruction problem of superiorized/regularized least-squares solutions of underdetermined linear equations with nonnegativity variable bounds. Regarding superiorization, the state of the art is examined for this problem class, and a novel approach is proposed that employs proximal mappings and is structurally similar to the established forward-backward optimization approach. Regarding convex optimization, accelerated forward-backward splitting with inexact proximal maps is worked out and applied both to the natural splitting least-squares term/regularizer and to the reverse splitting regularizer/least-squares term. Our numerical findings suggest that superiorization can approach the solution of the optimization problem and, after appropriate parameter tuning, leads to comparable results at significantly lower cost. On the other hand, applying accelerated forward-backward optimization to the reverse splitting slightly outperforms superiorization, which suggests that convex optimization can approach superiorization too, given a suitable problem splitting.
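For concreteness, here is a minimal sketch of the accelerated forward-backward iteration for the natural splitting, on the model 0.5||Ax-b||^2 + lam*||x||_1 subject to x >= 0. The l1 regularizer is an assumed example (the paper's regularizer may differ); under nonnegativity its proximal map reduces to a shift followed by projection, so the backward step stays exact and cheap.

```python
import numpy as np

def accel_fb_nonneg(A, b, lam, n_iter=300):
    """Accelerated forward-backward splitting, least-squares forward step.

    Forward: gradient of 0.5||Ax-b||^2 at an extrapolated point.
    Backward: prox of step*lam*||.||_1 plus the x >= 0 bound, which for
    nonnegative x is max(z - step*lam, 0) coordinate-wise.
    """
    n = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = x_prev = np.zeros(n)
    t = 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x + ((t - 1.0) / t_next) * (x - x_prev)     # Nesterov extrapolation
        z = y - step * (A.T @ (A @ y - b))              # forward (gradient) step
        x_prev, x = x, np.maximum(z - step * lam, 0.0)  # backward (prox) step
        t = t_next
    return x
```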
Yanjun Zhang, Hanyu Li (2020)
We present a novel greedy Gauss-Seidel method for solving large linear least squares problems. This method improves on the greedy randomized coordinate descent (GRCD) method recently proposed by Bai and Wu [Bai ZZ and Wu WT. On greedy randomized coordinate descent methods for solving large linear least-squares problems. Numer Linear Algebra Appl. 2019;26(4):1-15], which in turn improves on the popular randomized Gauss-Seidel method. A convergence analysis of the new method is provided. Numerical experiments show that, for the same accuracy, our method outperforms the GRCD method in terms of computing time.
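A minimal sketch of the greedy Gauss-Seidel idea: at each step, pick the column whose normalized residual correlation is largest and minimize exactly over that single coordinate. The selection rule below is the simplest greedy variant and the stopping test is an assumption; the published method's precise selection criterion may differ.

```python
import numpy as np

def greedy_gauss_seidel(A, b, n_steps=5000, tol=1e-12):
    """Greedy coordinate descent for min_x ||Ax - b||_2."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x
    col_sq = (A ** 2).sum(axis=0)                # squared column norms
    for _ in range(n_steps):
        corr = A.T @ r
        j = int(np.argmax(corr ** 2 / col_sq))   # greedy column choice
        if corr[j] ** 2 / col_sq[j] < tol:
            break
        delta = corr[j] / col_sq[j]              # exact 1-D minimization
        x[j] += delta
        r -= delta * A[:, j]                     # keep the residual consistent
    return x

# Toy run: consistent overdetermined system.
rng = np.random.default_rng(2)
A = rng.standard_normal((200, 30))
b = A @ rng.standard_normal(30)
print(np.linalg.norm(A @ greedy_gauss_seidel(A, b) - b))
```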