
A Gauss-Seidel Iterative Thresholding Algorithm for $l_q$ Regularized Least Squares Regression

Posted by: Jinshan Zeng
Publication date: 2015
Research field: Informatics Engineering
Paper language: English

In recent studies on sparse modeling, $l_q$ ($0<q<1$) regularized least squares regression ($l_q$LS) has received considerable attention due to its superiority in sparsity-inducing and bias-reduction over its convex counterparts. In this paper, we propose a Gauss-Seidel iterative thresholding algorithm (called GAITA) for solving this problem. Different from the classical iterative thresholding algorithms, which use the Jacobi updating rule, GAITA takes advantage of the Gauss-Seidel rule to update the coordinate coefficients. Under a mild condition, we can show that the support set and sign of an arbitrary sequence generated by GAITA converge within finitely many iterations. This convergence property, together with the Kurdyka-Łojasiewicz property of ($l_q$LS), naturally yields the strong convergence of GAITA under the same condition, which is generally weaker than the condition required for the convergence of the classical iterative thresholding algorithms. Furthermore, we demonstrate that GAITA converges to a local minimizer under certain additional conditions. A set of numerical experiments is provided to show the effectiveness and, in particular, the much faster convergence of GAITA compared with the classical iterative thresholding algorithms.
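To make the difference between the two updating rules concrete, the following Python sketch implements a Gauss-Seidel coordinate-wise thresholding loop of the kind the abstract describes. It is a minimal illustration, not the authors' implementation: the function names are ours, and the scalar $l_q$ proximal subproblem is solved by a simple grid search, since closed-form thresholding operators are known only for special values of $q$ (e.g., $q = 1/2$ and $q = 2/3$).

```python
import numpy as np

def prox_lq(z, lam, q, grid=2001):
    """Scalar proximal map of t -> lam*|t|^q (0 < q < 1), solved numerically.

    Minimizes 0.5*(t - z)^2 + lam*|t|^q over a sign-matched grid; the
    minimizer always lies between 0 and z, and t = 0 is always a candidate.
    """
    if z == 0.0:
        return 0.0
    t = np.sign(z) * np.linspace(0.0, abs(z), grid)
    obj = 0.5 * (t - z) ** 2 + lam * np.abs(t) ** q
    return t[np.argmin(obj)]

def gaita_sketch(A, b, lam, q=0.5, n_sweeps=200):
    """Gauss-Seidel iterative thresholding for 0.5*||Ax-b||^2 + lam*||x||_q^q.

    Each coordinate is refreshed immediately, so later coordinates in the
    same sweep see the updated residual -- unlike the Jacobi rule, where the
    whole gradient is computed from the previous iterate.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x                     # residual, maintained incrementally
    col_sq = np.sum(A ** 2, axis=0)   # squared column norms (assumed nonzero)
    for _ in range(n_sweeps):
        for i in range(n):
            z = x[i] + A[:, i] @ r / col_sq[i]      # exact 1-D LS step
            x_new = prox_lq(z, lam / col_sq[i], q)  # coordinate-wise prox
            r += A[:, i] * (x[i] - x_new)           # refresh residual in place
            x[i] = x_new
    return x
```

Refreshing the residual after every single coordinate is exactly what distinguishes the Gauss-Seidel rule from the Jacobi rule, where the thresholding is applied to all coordinates simultaneously from the old iterate.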




Read also

Fan Wu, Wei Bian, Xiaoping Xue (2021)
We investigate a class of constrained sparse regression problems with cardinality penalty, where the feasible set is defined by a box constraint and the loss function is convex but not necessarily smooth. First, we put forward a smoothing fast iterative hard thresholding (SFIHT) algorithm for solving such optimization problems, which combines smoothing approximations, extrapolation techniques, and iterative hard thresholding methods. The extrapolation coefficients can be chosen to satisfy $\sup_k \beta_k = 1$ in the proposed algorithm. We discuss the convergence behavior of the algorithm with different extrapolation coefficients and give sufficient conditions to ensure that any accumulation point of the iterates is a local minimizer of the original cardinality-penalized problem. In particular, for a class of fixed extrapolation coefficients, we discuss several different update rules for the smoothing parameter and obtain a convergence rate of $O(\ln k/k)$ on the loss and objective function values. Second, we consider the case in which the loss function is Lipschitz continuously differentiable and develop a fast iterative hard thresholding (FIHT) algorithm to solve it. We prove that the iterates of FIHT converge to a local minimizer of the problem that satisfies a desirable lower bound property. Moreover, we show that the convergence rates of the loss and objective function values are $o(k^{-2})$. Finally, some numerical examples are presented to illustrate the theoretical results.
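As a rough illustration of the second (smooth-loss) case, here is a minimal FIHT-style sketch in Python. It assumes a Lipschitz continuously differentiable loss with known constant $L$, uses a fixed extrapolation coefficient rather than the general sequences $\{\beta_k\}$ analyzed in the paper, and omits the box constraint; the function names are ours.

```python
import numpy as np

def hard_threshold(z, lam):
    """Proximal map of lam*||x||_0: keep z_i exactly when 0.5*z_i^2 > lam."""
    return np.where(np.abs(z) > np.sqrt(2.0 * lam), z, 0.0)

def fiht_sketch(grad_f, L, x0, lam, beta=0.9, n_iter=500):
    """Fast iterative hard thresholding with a fixed extrapolation coefficient.

    grad_f : gradient of the smooth loss, with L-Lipschitz continuity constant L.
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)                 # extrapolation step
        z = y - grad_f(y) / L                       # gradient step at y
        x_prev, x = x, hard_threshold(z, lam / L)   # prox of (lam/L)*||.||_0
    return x
```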
Yanjun Zhang, Hanyu Li (2020)
We present a novel greedy Gauss-Seidel method for solving large linear least squares problems. This method improves the greedy randomized coordinate descent (GRCD) method proposed recently by Bai and Wu [Bai ZZ and Wu WT. On greedy randomized coordinate descent methods for solving large linear least-squares problems. Numer Linear Algebra Appl. 2019;26(4):1-15], which in turn improves the popular randomized Gauss-Seidel method. Convergence analysis of the new method is provided. Numerical experiments show that, for the same accuracy, our method outperforms the GRCD method in terms of computing time.
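A minimal Python sketch of the greedy Gauss-Seidel idea follows. The selection rule here simply picks the coordinate with the largest scaled residual correlation; the exact greedy criterion analyzed in this paper and in the GRCD reference may differ, and zero columns of A are assumed away.

```python
import numpy as np

def greedy_gauss_seidel_sketch(A, b, n_iter=1000):
    """Greedy coordinate updates for min_x ||Ax - b||^2."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x
    col_sq = np.sum(A ** 2, axis=0)        # squared column norms (assumed > 0)
    for _ in range(n_iter):
        scores = (A.T @ r) ** 2 / col_sq   # proportional to per-coordinate loss decrease
        j = int(np.argmax(scores))         # deterministic greedy pick
        delta = (A[:, j] @ r) / col_sq[j]  # exact one-dimensional LS update
        x[j] += delta
        r -= A[:, j] * delta
    return x
```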
Given a linear regression setting, Iterative Least Trimmed Squares (ILTS) involves alternating between (a) selecting the subset of samples with the lowest current loss, and (b) re-fitting the linear model only on that subset. Both steps are very fast and simple. In this paper we analyze ILTS in the setting of mixed linear regression with corruptions (MLR-C). We first establish deterministic conditions (on the features, etc.) under which the ILTS iterate converges linearly to the closest mixture component. We also provide a global algorithm that uses ILTS as a subroutine to fully solve mixed linear regressions with corruptions. We then evaluate it for the widely studied setting of isotropic Gaussian features and establish that we match or improve on existing results in terms of sample complexity. Finally, we provide an ODE analysis for a gradient-descent variant of ILTS that has optimal time complexity. Our results provide initial theoretical evidence that iteratively fitting to the best subset of samples -- a potentially widely applicable idea -- can provably provide state-of-the-art performance in bad training data settings.
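The two alternating steps translate almost directly into code. The sketch below is our own minimal rendering of the ILTS loop, with k denoting the assumed number of uncorrupted samples; it omits the mixture-component and global-algorithm machinery analyzed in the paper.

```python
import numpy as np

def ilts_sketch(X, y, k, n_iter=50):
    """Iterative Least Trimmed Squares: trim to the k best samples, re-fit."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]    # warm start on all samples
    for _ in range(n_iter):
        resid = (X @ w - y) ** 2
        keep = np.argsort(resid)[:k]            # (a) lowest-loss subset
        w = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]  # (b) re-fit
    return w
```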
Hanyu Li, Yanjun Zhang (2020)
Using a greedy strategy to first construct a control index set of coordinates and then choose the corresponding column submatrix in each iteration, we present a greedy block Gauss-Seidel (GBGS) method for solving large linear least squares problems. Theoretical analysis demonstrates that the convergence factor of the GBGS method can be much smaller than that of the greedy randomized coordinate descent (GRCD) method proposed recently in the literature. On the basis of the GBGS method, we further present a pseudoinverse-free greedy block Gauss-Seidel method, which does not need to calculate the Moore-Penrose pseudoinverse of the column submatrix in each iteration and hence can achieve greater acceleration. Moreover, this method can also be used for distributed implementations. Numerical experiments show that, for the same accuracy, our methods can far outperform the GRCD method in terms of iteration number and computing time.
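The following Python sketch illustrates the block version of the greedy update: a control index set is built from the largest scaled residual correlations, and the corresponding column submatrix drives an exact block step via the Moore-Penrose pseudoinverse (precisely the computation the paper's pseudoinverse-free variant avoids). The selection rule and names are our own simplification, not the paper's exact criterion.

```python
import numpy as np

def gbgs_sketch(A, b, block_size=10, n_iter=200):
    """Greedy block Gauss-Seidel for min_x ||Ax - b||^2."""
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x
    col_sq = np.sum(A ** 2, axis=0)              # assumed nonzero columns
    for _ in range(n_iter):
        scores = (A.T @ r) ** 2 / col_sq
        J = np.argsort(scores)[-block_size:]     # greedy control index set
        delta = np.linalg.pinv(A[:, J]) @ r      # exact block LS step
        x[J] += delta
        r -= A[:, J] @ delta
    return x
```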
The total least squares problem with general Tikhonov regularization can be reformulated as a one-dimensional parametric minimization problem (PM), where each parametric function evaluation corresponds to solving an $n$-dimensional trust region subproblem. Under a mild assumption, the parametric function is differentiable, and an efficient bisection method has been proposed in the literature for solving (PM). In the first part of this paper, we show that the bisection algorithm can be greatly improved by reducing the initially estimated interval covering the optimal parameter. It is observed that the bisection method cannot guarantee finding the globally optimal solution, since the nonconvex (PM) can have a local non-global minimizer. The main contribution of this paper is to propose an efficient branch-and-bound algorithm for globally solving (PM), based on a novel underestimation of the parametric function over any given interval that uses only the parametric function evaluations at the two endpoints. We show that the new algorithm (BTD Algorithm) returns a global $\epsilon$-approximation solution with a computational effort of at most $O(n^3/\epsilon)$ under the same assumption as in the bisection method. The numerical results demonstrate that our new global optimization algorithm performs much faster than the improved version of the bisection heuristic algorithm.
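For intuition, here is a minimal sketch of the bisection idea on a differentiable one-dimensional parametric function g: in the paper's setting, each evaluation of g'(t) would itself require solving an $n$-dimensional trust region subproblem, and, as noted above, the scheme only locates a stationary point, so it can miss the global minimizer of the nonconvex (PM). The function g_prime and the interval endpoints are placeholders.

```python
def bisect_stationary_point(g_prime, lo, hi, tol=1e-8):
    """Bisection on the sign of g' over [lo, hi], assuming g'(lo) < 0 < g'(hi).

    Returns an approximate stationary point of g; for a nonconvex g this
    may be a local, non-global minimizer.
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g_prime(mid) > 0.0:
            hi = mid        # stationary point lies to the left
        else:
            lo = mid        # stationary point lies to the right
    return 0.5 * (lo + hi)
```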