
Forward-Backward Greedy Algorithms for Atomic Norm Regularization

Published by: Nikhil Rao
Publication date: 2014
Research field: Information Engineering
Language: English

In many signal processing applications, the aim is to reconstruct a signal that has a simple representation with respect to a certain basis or frame. Fundamental elements of the basis known as atoms allow us to define atomic norms that can be used to formulate convex regularizations for the reconstruction problem. Efficient algorithms are available to solve these formulations in certain special cases, but an approach that works well for general atomic norms, both in terms of speed and reconstruction accuracy, remains to be found. This paper describes an optimization algorithm called CoGEnT that produces solutions with succinct atomic representations for reconstruction problems, generally formulated with atomic-norm constraints. CoGEnT combines a greedy selection scheme based on the conditional gradient approach with a backward (or truncation) step that exploits the quadratic nature of the objective to reduce the basis size. We establish convergence properties and validate the algorithm via extensive numerical experiments on a suite of signal processing applications. Our algorithm and analysis also allow for inexact forward steps and for occasional enhancements of the current representation to be performed. CoGEnT can outperform the basic conditional gradient method, and indeed many methods that are tailored to specific applications, when the enhancement and truncation steps are defined appropriately. We also introduce several novel applications that are enabled by the atomic-norm framework, including tensor completion, moment problems in signal processing, and graph deconvolution.
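To make the forward-backward structure concrete, here is a minimal sketch for the special case where the atomic set consists of the signed canonical basis vectors, so the atomic-norm constraint reduces to an l1-ball constraint. It is an illustration of the scheme described above, not the authors' implementation: CoGEnT re-optimizes over the retained atoms after truncation and allows inexact forward steps, both omitted here, and the names cogent_l1, tau and backward_tol are ours.

import numpy as np

def cogent_l1(A, b, tau, iters=200, backward_tol=0.5):
    # minimize 0.5*||Ax - b||^2 subject to ||x||_1 <= tau
    f = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the quadratic objective
        i = int(np.argmax(np.abs(g)))      # forward step: atom most aligned with -g
        s = np.zeros_like(x)
        s[i] = -tau * np.sign(g[i])        # extreme point of the l1 (atomic-norm) ball
        d = s - x
        Ad = A @ d
        if Ad @ Ad <= 1e-12:
            break                          # no usable descent direction
        gamma = np.clip(-(g @ d) / (Ad @ Ad), 0.0, 1.0)  # exact line search for a quadratic
        x_new = x + gamma * d
        gain = f(x) - f(x_new)
        # backward (truncation) step: zero the support entry whose removal hurts
        # the objective least, if the increase is small next to the forward gain
        support = np.flatnonzero(x_new)
        if len(support) > 1:
            trials = []
            for k in support:
                z = x_new.copy()
                z[k] = 0.0
                trials.append((f(z), k))
            cost, k = min(trials)
            if cost - f(x_new) <= backward_tol * gain:
                x_new[k] = 0.0
        x = x_new
    return x

The exact line search exploits the quadratic nature of the objective, as the abstract notes; the same property makes the trial removals in the backward step cheap to evaluate.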

Read also

We present SimultaneousGreedys, a deterministic algorithm for constrained submodular maximization. At a high level, the algorithm maintains $\ell$ solutions and greedily updates them in a simultaneous fashion. SimultaneousGreedys achieves the tightest known approximation guarantees for both $k$-extendible systems and the more general $k$-systems, which are $(k+1)^2/k = k + \mathcal{O}(1)$ and $(1 + \sqrt{k+2})^2 = k + \mathcal{O}(\sqrt{k})$, respectively. This is in contrast to previous algorithms, which are designed to provide tight approximation guarantees in one setting, but not both. We also improve the analysis of RepeatedGreedy, showing that it achieves an approximation ratio of $k + \mathcal{O}(\sqrt{k})$ for $k$-systems when allowed to run for $\mathcal{O}(\sqrt{k})$ iterations, an improvement in both the runtime and approximation over previous analyses. We demonstrate that both algorithms may be modified to run in nearly linear time with an arbitrarily small loss in the approximation. Both SimultaneousGreedys and RepeatedGreedy are flexible enough to incorporate the intersection of $m$ additional knapsack constraints, while retaining similar approximation guarantees: both algorithms yield an approximation guarantee of roughly $k + 2m + \mathcal{O}(\sqrt{k+m})$ for $k$-systems, and SimultaneousGreedys enjoys an improved approximation guarantee of $k + 2m + \mathcal{O}(\sqrt{m})$ for $k$-extendible systems. To complement our algorithmic contributions, we provide a hardness result which states that no algorithm making polynomially many oracle queries can achieve an approximation better than $k + 1/2 + \varepsilon$. We also present SubmodularGreedy.jl, a Julia package which implements these algorithms and may be downloaded at https://github.com/crharshaw/SubmodularGreedy.jl . Finally, we test the effectiveness of these algorithms on real datasets.
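As a sketch of the high-level loop described above (not the authors' implementation, which lives in SubmodularGreedy.jl together with the discarding rules and nearly-linear-time variants): maintain $\ell$ candidate solutions and repeatedly hand the element with the largest marginal gain to whichever solution can feasibly accept it. The oracle f, the $k$-system membership test is_feasible, and all names below are our own placeholders.

def simultaneous_greedys(ground, f, is_feasible, ell):
    # keep ell partial solutions and update them simultaneously
    solutions = [set() for _ in range(ell)]
    remaining = set(ground)
    while remaining:
        best = None                        # (marginal gain, solution index, element)
        for e in remaining:
            for i, sol in enumerate(solutions):
                if is_feasible(sol | {e}):
                    gain = f(sol | {e}) - f(sol)
                    if best is None or gain > best[0]:
                        best = (gain, i, e)
        if best is None or best[0] <= 0:
            break                          # no feasible improving move remains
        _, i, e = best
        solutions[i].add(e)
        remaining.discard(e)               # each element joins at most one solution
    return max(solutions, key=f)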
We give improved algorithms for the $\ell_{p}$-regression problem, $\min_{x} \|x\|_{p}$ such that $Ax = b$, for all $p \in (1,2) \cup (2,\infty)$. Our algorithms obtain a high-accuracy solution in $\tilde{O}_{p}(m^{\frac{|p-2|}{2p + |p-2|}}) \le \tilde{O}_{p}(m^{\frac{1}{3}})$ iterations, where each iteration requires solving an $m \times m$ linear system, $m$ being the dimension of the ambient space. By maintaining an approximate inverse of the linear systems that we solve in each iteration, we give algorithms for solving $\ell_{p}$-regression to $1/\text{poly}(n)$ accuracy that run in time $\tilde{O}_p(m^{\max\{\omega, 7/3\}})$, where $\omega$ is the matrix multiplication constant. For the current best value of $\omega > 2.37$, we can thus solve $\ell_{p}$-regression as fast as $\ell_{2}$-regression, for all constant $p$ bounded away from $1$. Our algorithms can be combined with fast graph Laplacian linear equation solvers to give minimum $\ell_{p}$-norm flow / voltage solutions to $1/\text{poly}(n)$ accuracy on an undirected graph with $m$ edges in $\tilde{O}_{p}(m^{1 + \frac{|p-2|}{2p + |p-2|}}) \le \tilde{O}_{p}(m^{\frac{4}{3}})$ time. For sparse graphs and for matrices with similar dimensions, our iteration counts and running times improve on the $p$-norm regression algorithm by [Bubeck-Cohen-Lee-Li STOC'18] and on general-purpose convex optimization algorithms. At the core of our algorithms is an iterative refinement scheme for $\ell_{p}$-norms, using the smoothed $\ell_{p}$-norms introduced in the work of Bubeck et al. Given an initial solution, we construct a problem that seeks to minimize a quadratically smoothed $\ell_{p}$ norm over a subspace, such that a crude solution to this problem allows us to improve the initial solution by a constant factor, leading to algorithms with fast convergence.
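The quadratically smoothed $\ell_{p}$ norm can be pictured coordinate-wise: below a threshold $t$ the $p$-th power is replaced by a quadratic that matches it in value and slope at $t$. One standard $C^1$ smoothing of this kind is the following (applied per coordinate and summed; the exact normalization used by Bubeck et al. may differ):

$$h_t(z) = \begin{cases} \frac{p}{2}\, t^{p-2} z^{2}, & |z| \le t, \\ |z|^{p} + \left(\frac{p}{2} - 1\right) t^{p}, & |z| > t. \end{cases}$$

Both branches equal $\frac{p}{2} t^{p}$ and have slope $p\, t^{p-1}$ at $|z| = t$, so $h_t$ is continuously differentiable; minimizing $\sum_i h_t(x_i)$ over the subspace is the crude subproblem whose approximate solution improves the current iterate by a constant factor.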
MAXCUT defines a classical NP-hard problem for graph partitioning and serves as a typical case of the symmetric non-monotone Unconstrained Submodular Maximization (USM) problem. Applications of MAXCUT are abundant in machine learning, computer vision and statistical physics. Greedy algorithms to approximately solve MAXCUT rely on greedy vertex labelling or on an edge contraction strategy. These algorithms have been studied by measuring their approximation ratios in the worst-case setting, but very little is known about their robustness to noise contamination of the input data in the average case. Adapting the framework of Approximation Set Coding, we present a method to exactly measure the cardinality of the algorithmic approximation sets of five greedy MAXCUT algorithms. Their information contents are explored for graph instances generated by two different noise models: the edge reversal model and the Gaussian edge weights model. The results provide insights into the robustness of different greedy heuristics and techniques for MAXCUT, which can be used for algorithm design of general USM problems.
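The greedy vertex-labelling heuristic mentioned above takes only a few lines: each vertex joins the side opposite its heavier already-labelled neighbourhood, so the heavier edges are cut. This is a generic sketch (the graph encoding and names are ours), not the specific variants studied in the paper.

def greedy_maxcut(weights, order):
    # weights[u][v] = weight of edge {u, v}, stored symmetrically;
    # order = the order in which vertices are labelled
    side = {}
    for v in order:
        nbrs = weights.get(v, {})
        to0 = sum(w for u, w in nbrs.items() if side.get(u) == 0)
        to1 = sum(w for u, w in nbrs.items() if side.get(u) == 1)
        side[v] = 0 if to1 >= to0 else 1   # join the side opposite the heavier one
    return side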
Motivated by the use of high-speed circuit switches in large-scale data centers, we consider the problem of circuit switch scheduling. In this problem we are given demands between pairs of servers, and the goal is to schedule at every time step a matching between the servers while maximizing the total satisfied demand over time. The crux of this scheduling problem is that once one shifts from one matching to a different one, a fixed delay $\delta$ is incurred during which no data can be transmitted. For the offline version of the problem we present a $(1-\frac{1}{e}-\epsilon)$-approximation (for any constant $\epsilon > 0$). Since the natural linear programming relaxation for the problem has an unbounded integrality gap, we adopt a hybrid approach that combines a combinatorial greedy with randomized rounding of a different, suitable linear program. For the online version of the problem we present a (bi-criteria) $((e-1)/(2e-1)-\epsilon)$-competitive ratio (for any constant $\epsilon > 0$) that exceeds time by an additive factor of $O(\frac{\delta}{\epsilon})$. We note that no uni-criteria online algorithm is possible. Surprisingly, we obtain the result by reducing the online version to the offline one.
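For intuition about the model (demands, matchings, and the reconfiguration delay $\delta$), here is a toy greedy baseline, not the paper's hybrid greedy-plus-rounding algorithm: repeatedly configure a heavy matching on the residual demand, pay $\delta$, and serve it for a fixed window. The unit-rate assumption, the fixed-window policy, and all names are our own simplifications.

def greedy_switch_schedule(demand, horizon, delta, window):
    # demand: {(src, dst): units}; each configured circuit serves 1 unit per time unit
    residual = dict(demand)
    schedule, t = [], 0.0
    while t + delta < horizon and any(v > 0 for v in residual.values()):
        used, matching = set(), []
        # greedy matching: scan pairs by decreasing residual demand,
        # skipping endpoints already used in this configuration
        for (s, d), v in sorted(residual.items(), key=lambda kv: -kv[1]):
            if v > 0 and s not in used and d not in used:
                matching.append((s, d))
                used.update((s, d))
        if not matching:
            break
        t += delta                          # reconfiguration: nothing is sent
        run = min(window, horizon - t)
        for e in matching:
            residual[e] = max(0.0, residual[e] - run)
        schedule.append((matching, run))
        t += run
    return schedule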
In many optimization problems, a feasible solution induces a multi-dimensional cost vector. For example, in load balancing a schedule induces a load vector across the machines; in $k$-clustering, opening $k$ facilities induces an assignment cost vector across the clients. In this paper we consider the following minimum norm optimization problem: given an arbitrary monotone, symmetric norm, find a solution which minimizes the norm of the induced cost vector. This generalizes many fundamental NP-hard problems. We give a general framework to tackle the minimum norm problem, and illustrate its efficacy in the unrelated machine load balancing and $k$-clustering settings. Our concrete results are the following.
$\bullet$ We give constant-factor approximation algorithms for the minimum norm load balancing problem on unrelated machines, and for the minimum norm $k$-clustering problem. To our knowledge, our results constitute the first constant-factor approximations for such a general suite of objectives.
$\bullet$ In load balancing with unrelated machines, we give a $2$-approximation for the problem of finding an assignment minimizing the sum of the largest $\ell$ loads, for any $\ell$. We give a $(2+\varepsilon)$-approximation for the so-called ordered load-balancing problem.
$\bullet$ For $k$-clustering, we give a $(5+\varepsilon)$-approximation for the ordered $k$-median problem, significantly improving the constant-factor approximations from Byrka, Sornat, and Spoerhase (STOC 2018) and Chakrabarty and Swamy (ICALP 2018).
$\bullet$ Our techniques also imply $O(1)$-approximations to the best simultaneous optimization factor for any instance of the unrelated machine load-balancing and $k$-clustering settings. To our knowledge, these are the first positive simultaneous optimization results in these settings.
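The special norms named above are one-liners to evaluate, which makes the generalization concrete: the Top-$\ell$ norm of a load vector is the sum of its $\ell$ largest entries ($\ell = 1$ recovers the makespan, and $\ell$ equal to the number of machines recovers the total load), and an ordered norm weights the sorted entries by a non-increasing weight vector. A sketch, with names of our choosing:

def top_ell_norm(loads, ell):
    # sum of the ell largest coordinates; ell = 1 is the makespan (max load)
    return sum(sorted(loads, reverse=True)[:ell])

def ordered_norm(loads, w):
    # w must be non-negative and non-increasing for this to be a norm;
    # the ordered load-balancing / ordered k-median objectives minimize it
    return sum(wi * xi for wi, xi in zip(w, sorted(loads, reverse=True)))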
