
Iterative Refinement for $\ell_p$-norm Regression

Published by: Rasmus J Kyng
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





We give improved algorithms for the $\ell_p$-regression problem, $\min_x \|x\|_p$ such that $Ax = b$, for all $p \in (1,2) \cup (2,\infty)$. Our algorithms obtain a high-accuracy solution in $\tilde{O}_p(m^{\frac{|p-2|}{2p+|p-2|}}) \le \tilde{O}_p(m^{\frac{1}{3}})$ iterations, where each iteration requires solving an $m \times m$ linear system, $m$ being the dimension of the ambient space. By maintaining an approximate inverse of the linear systems that we solve in each iteration, we give algorithms for solving $\ell_p$-regression to $1/\text{poly}(n)$ accuracy that run in time $\tilde{O}_p(m^{\max\{\omega, 7/3\}})$, where $\omega$ is the matrix multiplication constant. For the current best value of $\omega > 2.37$, we can thus solve $\ell_p$-regression as fast as $\ell_2$-regression, for all constant $p$ bounded away from $1$. Our algorithms can be combined with fast graph Laplacian linear equation solvers to give minimum $\ell_p$-norm flow / voltage solutions to $1/\text{poly}(n)$ accuracy on an undirected graph with $m$ edges in $\tilde{O}_p(m^{1+\frac{|p-2|}{2p+|p-2|}}) \le \tilde{O}_p(m^{\frac{4}{3}})$ time. For sparse graphs and for matrices with similar dimensions, our iteration counts and running times improve on the $p$-norm regression algorithm of [Bubeck-Cohen-Lee-Li, STOC'18] and on general-purpose convex optimization algorithms. At the core of our algorithms is an iterative refinement scheme for $\ell_p$-norms, using the smoothed $\ell_p$-norms introduced in the work of Bubeck et al. Given an initial solution, we construct a problem that seeks to minimize a quadratically smoothed $\ell_p$-norm over a subspace, such that even a crude solution to this problem allows us to improve the initial solution by a constant factor, leading to algorithms with fast convergence.
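
As a concrete (and heavily simplified) illustration of the refinement scheme, the Python sketch below assumes NumPy and one standard form of the quadratically smoothed $p$-th power $\gamma_p(t,\cdot)$. The abstract does not spell out the definition, so the formula behind the gradient `dgamma_p`, the projected-gradient subproblem solver, and all step sizes are our illustrative assumptions, not the paper's algorithm (which instead solves an $m \times m$ linear system per iteration).

import numpy as np

def dgamma_p(t, d, p):
    # Gradient of an assumed smoothed p-th power gamma_p(t, .):
    # quadratic ((p/2) t^(p-2) d^2) for |d| <= t, |d|^p beyond t.
    return np.where(np.abs(d) <= t,
                    p * t ** (p - 2) * d,
                    p * np.sign(d) * np.abs(d) ** (p - 1))

def refine_lp(A, b, p=4, outer=40, inner=200, lr=1e-3):
    # Sketch of iterative refinement for min ||x||_p s.t. Ax = b, p >= 2.
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # feasible start: Ax = b
    P = np.eye(A.shape[1]) - np.linalg.pinv(A) @ A  # projector onto null(A)
    for _ in range(outer):
        g = p * np.sign(x) * np.abs(x) ** (p - 1)   # gradient of sum_i |x_i|^p
        t = np.abs(x) + 1e-8                        # per-coordinate smoothing
        d = np.zeros_like(x)
        for _ in range(inner):                      # crude subproblem solve:
            d -= lr * (P @ (g + dgamma_p(t, d, p))) # min g.d + gamma_p(t, d)
        x = x + d                                   # update stays feasible
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((5, 20)), rng.standard_normal(5)
    x = refine_lp(A, b)
    print(np.linalg.norm(A @ x - b), np.linalg.norm(x, 4))

The point of the scheme is that each outer pass only needs the residual subproblem solved to constant-factor accuracy, which is what drives the $\tilde{O}_p(m^{\frac{1}{3}})$ iteration bound.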




Read also

We give almost-linear-time algorithms for constructing sparsifiers with $n\,\text{poly}(\log n)$ edges that approximately preserve weighted $(\ell_2^2 + \ell_p^p)$ flow or voltage objectives on graphs. For flow objectives, this is the first sparsifier construction for such mixed objectives beyond unit $\ell_p$ weights, and is based on expander decompositions. For voltage objectives, we give the first sparsifier construction for these objectives, which we build using graph spanners and leverage score sampling. Together with the iterative refinement framework of [Adil et al., SODA 2019] and a new multiplicative-weights-based constant-approximation algorithm for mixed-objective flows or voltages, we show how to find $(1+2^{-\text{poly}(\log n)})$-approximations for weighted $\ell_p$-norm minimizing flows or voltages in $p(m^{1+o(1)} + n^{4/3+o(1)})$ time for $p = \omega(1)$, which is almost-linear for graphs that are slightly dense ($m \ge n^{4/3+o(1)}$).
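
Of the ingredients named above, leverage score sampling is the easiest to sketch in isolation. The following Python fragment is our plain $\ell_2$ illustration of that primitive, not the paper's mixed-objective construction: it samples rows of a matrix $B$ proportionally to their leverage scores and reweights them so that $B^\top B$ is preserved in expectation.

import numpy as np

def leverage_score_sample(B, k, rng):
    # Leverage score of row i: b_i^T (B^T B)^+ b_i.
    pinv = np.linalg.pinv(B.T @ B)
    scores = np.einsum('ij,jk,ik->i', B, pinv, B)
    probs = scores / scores.sum()
    # Sample k rows with probability proportional to leverage, and rescale
    # each sampled row so that Bs^T Bs matches B^T B in expectation.
    idx = rng.choice(len(B), size=k, p=probs)
    weights = 1.0 / (k * probs[idx])
    return B[idx] * np.sqrt(weights)[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = rng.standard_normal((500, 20))
    Bs = leverage_score_sample(B, k=100, rng=rng)
    # Relative spectral error of the sparsified quadratic form.
    print(np.linalg.norm(Bs.T @ Bs - B.T @ B, 2) / np.linalg.norm(B.T @ B, 2))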
We present faster high-accuracy algorithms for computing $\ell_p$-norm minimizing flows. On a graph with $m$ edges, our algorithm can compute a $(1+1/\text{poly}(m))$-approximate unweighted $\ell_p$-norm minimizing flow with $p\,m^{1+\frac{1}{p-1}+o(1)}$ operations, for any $p \ge 2$, giving the best bound for all $p \gtrsim 5.24$. Combined with the algorithm from the work of Adil et al. (SODA 2019), we can now compute such flows for any $2 \le p \le m^{o(1)}$ in time at most $O(m^{1.24})$. In comparison, the previous best running time was $\Omega(m^{1.33})$ for large constant $p$. For $p \sim \delta^{-1}\log m$, our algorithm computes a $(1+\delta)$-approximate maximum flow on undirected graphs using $m^{1+o(1)}\delta^{-1}$ operations, matching the current best bound, albeit only for unit-capacity graphs. We also give an algorithm for solving general $\ell_p$-norm regression problems for large $p$. Our algorithm makes $p\,m^{\frac{1}{3}+o(1)}\log^2(1/\varepsilon)$ calls to a linear solver. This gives the first high-accuracy algorithm for computing weighted $\ell_p$-norm minimizing flows that runs in time $o(m^{1.5})$ for some $p = m^{\Omega(1)}$. Our key technical contribution is to show that the smoothed $\ell_p$-norm problems introduced by Adil et al. are interreducible for different values of $p$. No such reduction is known for standard $\ell_p$-norm problems.
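
The stated choice $p \sim \delta^{-1}\log m$ for $(1+\delta)$-approximate maximum flow can be motivated by a standard norm comparison (our gloss, not text from the paper): for a flow vector $f \in \mathbb{R}^m$,

\[
\|f\|_\infty \;\le\; \|f\|_p \;\le\; m^{1/p}\,\|f\|_\infty,
\qquad
p = \delta^{-1}\log m \;\Longrightarrow\; m^{1/p} = e^{\delta} \le 1 + 2\delta \quad \text{for } 0 < \delta \le 1,
\]

so an $\ell_p$-norm minimizing flow has maximum congestion within a $(1+2\delta)$ factor of optimal.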
Linear regression in the $\ell_p$-norm is a canonical optimization problem that arises in several applications, including sparse recovery, semi-supervised learning, and signal processing. Generic convex optimization algorithms for solving $\ell_p$-regression are slow in practice. Iteratively Reweighted Least Squares (IRLS) is an easy-to-implement family of algorithms for solving these problems that has been studied for over 50 years. However, these algorithms often diverge for $p > 3$, and since the work of Osborne (1985), it has been an open problem whether there is an IRLS algorithm that is guaranteed to converge rapidly for $p > 3$. We propose p-IRLS, the first IRLS algorithm that provably converges geometrically for any $p \in [2,\infty)$. Our algorithm is simple to implement and is guaranteed to find a $(1+\varepsilon)$-approximate solution in $O(p^{3.5}\, m^{\frac{p-2}{2(p-1)}} \log\frac{m}{\varepsilon}) \le O_p(\sqrt{m}\,\log\frac{m}{\varepsilon})$ iterations. Our experiments demonstrate that it performs even better than our theoretical bounds, beats the standard Matlab/CVX implementation for solving these problems by 10--50x, and is the fastest among available implementations in the high-accuracy regime.
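
For contrast, here is the textbook IRLS iteration that p-IRLS builds on, sketched in Python with NumPy: solve a least-squares problem reweighted by $|r_i|^{p-2}$ and repeat. This bare version omits the damping that gives p-IRLS its geometric convergence guarantee, so it can diverge for larger $p$; the `eps` floor on the weights is an ad hoc safeguard, not part of the paper.

import numpy as np

def irls_lp(A, b, p, iters=50, eps=1e-8):
    # Textbook IRLS for min_x ||Ax - b||_p, p >= 2: repeatedly solve the
    # weighted least-squares problem with weights w_i = |r_i|^(p-2).
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        w = np.abs(r) ** (p - 2) + eps               # reweighting
        x = np.linalg.solve(A.T @ (A * w[:, None]),  # normal equations:
                            A.T @ (w * b))           # A^T W A x = A^T W b
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((200, 10)), rng.standard_normal(200)
    x = irls_lp(A, b, p=3)
    print(np.linalg.norm(A @ x - b, 3))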
In many signal processing applications, the aim is to reconstruct a signal that has a simple representation with respect to a certain basis or frame. Fundamental elements of the basis known as atoms allow us to define atomic norms that can be used to formulate convex regularizations for the reconstruction problem. Efficient algorithms are available to solve these formulations in certain special cases, but an approach that works well for general atomic norms, both in terms of speed and reconstruction accuracy, remains to be found. This paper describes an optimization algorithm called CoGEnT that produces solutions with succinct atomic representations for reconstruction problems, generally formulated with atomic-norm constraints. CoGEnT combines a greedy selection scheme based on the conditional gradient approach with a backward (or truncation) step that exploits the quadratic nature of the objective to reduce the basis size. We establish convergence properties and validate the algorithm via extensive numerical experiments on a suite of signal processing applications. Our algorithm and analysis also allow for inexact forward steps and for occasional enhancements of the current representation to be performed. CoGEnT can outperform the basic conditional gradient method, and indeed many methods that are tailored to specific applications, when the enhancement and truncation steps are defined appropriately. We also introduce several novel applications that are enabled by the atomic-norm framework, including tensor completion, moment problems in signal processing, and graph deconvolution.
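
A minimal sketch of the forward-backward pattern, assuming the simplest atomic set (signed coordinates, i.e. the scaled $\ell_1$ ball): the forward step is standard conditional gradient atom selection, and the truncation here is a crude drop-small-coefficients stand-in for CoGEnT's backward step, which uses the quadratic objective to decide what to prune.

import numpy as np

def cogent_sketch(Phi, y, tau, iters=100):
    # Conditional-gradient sketch for min ||Phi x - y||^2 s.t. ||x||_1 <= tau.
    x = np.zeros(Phi.shape[1])
    for k in range(iters):
        grad = Phi.T @ (Phi @ x - y)
        # Forward step: the vertex of the scaled l1 ball (a signed
        # coordinate atom) most aligned with the negative gradient.
        i = int(np.argmax(np.abs(grad)))
        atom = np.zeros_like(x)
        atom[i] = -tau * np.sign(grad[i])
        step = 2.0 / (k + 2.0)                # standard Frank-Wolfe schedule
        x = (1 - step) * x + step * atom
        # Backward (truncation) step, heuristic version: drop coefficients
        # that are tiny relative to the largest one.
        if np.abs(x).max() > 0:
            x[np.abs(x) < 1e-4 * np.abs(x).max()] = 0.0
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[[3, 50, 77]] = [1.0, -2.0, 1.5]
    y = Phi @ x_true
    x = cogent_sketch(Phi, y, tau=np.abs(x_true).sum())
    print(np.linalg.norm(x - x_true))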
Sebastiano Vigna (2014)
Step-asynchronous successive overrelaxation updates the values contained in a single vector using the usual Gauss-Seidel-like weighted rule, but arbitrarily mixing old and new values, the only constraint being temporal coherence: you cannot use a value before it has been computed. We show that given a nonnegative real matrix $A$, a $\sigma \ge \rho(A)$, and a vector $\boldsymbol{w} > 0$ such that $A\boldsymbol{w} \le \sigma\boldsymbol{w}$, every iteration of step-asynchronous successive overrelaxation for the problem $(sI - A)\boldsymbol{x} = \boldsymbol{b}$, with $s > \sigma$, reduces geometrically the $\boldsymbol{w}$-norm of the current error by a factor that we can compute explicitly. Then, we show that given a $\sigma > \rho(A)$ it is in principle always possible to compute such a $\boldsymbol{w}$. This property makes it possible to estimate the supremum norm of the absolute error at each iteration without any additional hypothesis on $A$, even when $A$ is so large that computing the product $A\boldsymbol{x}$ is feasible, but estimating the supremum norm of $(sI-A)^{-1}$ is not.
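
To illustrate the iteration being analyzed, the Python sketch below runs synchronous in-place sweeps for $(sI - A)\boldsymbol{x} = \boldsymbol{b}$, each coordinate refreshed from the currently stored vector; this is one special case of the old/new value mixing the theorem permits, while a genuinely asynchronous multi-threaded run is beyond a short sketch. We pick $s$ above $\|A\|_1 \ge \rho(A)$ so the iteration contracts.

import numpy as np

def solve_sor_like(A, b, s, sweeps=100):
    # In-place sweeps for (sI - A)x = b, s > rho(A): coordinate i is
    # refreshed from the currently stored vector, so later coordinates
    # in a sweep already see values updated earlier in the same sweep.
    x = np.zeros_like(b)
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = (b[i] + A[i] @ x) / s
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.random((6, 6))            # nonnegative A
    s = np.linalg.norm(A, 1) + 1.0    # s > rho(A), since rho(A) <= ||A||_1
    b = rng.random(6)
    x = solve_sor_like(A, b, s)
    print(np.max(np.abs((s * np.eye(6) - A) @ x - b)))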